LWN: Comments on "Fedora floats AI-assisted contributions policy" https://lwn.net/Articles/1039623/ This is a special feed containing comments posted to the individual LWN article titled "Fedora floats AI-assisted contributions policy". en-us Thu, 09 Oct 2025 11:16:20 +0000 Thu, 09 Oct 2025 11:16:20 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net How to check copyright? https://lwn.net/Articles/1040915/ https://lwn.net/Articles/1040915/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; Hence why contributors need a way to check copyright compliance.</span><br> <p> This is a legal problem, and cannot be solved via (purely, or even mostly) technical means.<br> <p> <p> <p> </div> Mon, 06 Oct 2025 14:36:45 +0000 How to check copyright? https://lwn.net/Articles/1040914/ https://lwn.net/Articles/1040914/ farnz TBF, you also need such a mechanism to check copyright compliance of any code you've written yourself - you are also quite capable of accidental infringement (where having seen a particular way to write code before, you copy it unintentionally), and to defend yourself or the project you contribute to, you have to prove either that you never saw the original code that you're alleged to have copied (the clean room route) or that this code is on the "idea" side of the idea-expression distinction (however that's expressed in local law). Mon, 06 Oct 2025 14:29:19 +0000 How to check copyright? https://lwn.net/Articles/1040912/ https://lwn.net/Articles/1040912/ stefanha <div class="FormattedComment"> <span class="QuotedText">&gt; If I had an LLM and found myself sued like that, I'd certainly want to drag the querier into it ...</span><br> <p> Hence why contributors need a way to check copyright compliance.<br> </div> Mon, 06 Oct 2025 14:24:26 +0000 How to check copyright? 
https://lwn.net/Articles/1040855/ https://lwn.net/Articles/1040855/ kleptog <div class="FormattedComment"> While the issue of memorisation is interesting, it is ultimately not really relevant to the discussion. You don't need an LLM to intentionally violate copyright. The issue is whether you can use an LLM to *unintentionally* violate copyright.<br> <p> I think those papers actually show it is quite hard, because even with very specific prompting, the majority of texts could not be recovered to any significant degree. So what are the chances an LLM will reproduce a literal text without special prompting?<br> <p> Mathematically speaking, an LLM is just a function, and for every output there exists an input that will produce something close to it. Even if it is just "Repeat X". (Well, technically I don't know if we know that LLMs have a dense output space.) What are the chances a random person will hit one of those inputs that matches some copyrighted output?<br> <p> I suppose we've given the "infinite monkeys" a power-tool that makes it more likely for them to reproduce Shakespeare. Is it too likely?<br> </div> Sun, 05 Oct 2025 13:45:28 +0000 How to check copyright? https://lwn.net/Articles/1040850/ https://lwn.net/Articles/1040850/ pbonzini <div class="FormattedComment"> As prediction models they are not able to reproduce *all of it*, but they can reproduce a lot of specific texts with varying degrees of precision. For literary works, for example, the models often remember poetry more easily than prose. You can measure precision by checking if the model needs to be told the first few words as opposed to just the title, how often they replace a word with a synonym, and whether they go into the weeds after a few paragraphs in some texts or a few chapters in others. 
The same is true of programs.<br> <p> You can also use the language models as a source of probabilities for arithmetic coding, and some texts will compress ridiculously well, so much so that the only explanation is that large parts of the text are already present in the weights in compressed form. In fact, it can be mathematically proven that memorization, compression and training are essentially the same thing.<br> <p> Here is a paper from DeepMind on the memorization capabilities of LLMs: <a href="https://arxiv.org/pdf/2507.05578">https://arxiv.org/pdf/2507.05578</a><br> <p> And here is an earlier one that analyzed how memorization improves as the number of parameters grows: <a href="https://arxiv.org/pdf/2202.07646">https://arxiv.org/pdf/2202.07646</a><br> </div> Sun, 05 Oct 2025 10:07:47 +0000 How to check copyright? https://lwn.net/Articles/1040842/ https://lwn.net/Articles/1040842/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;a lossy representation of the training material, but they still contain it and are able to reproduce it</span><br> <p> This contradicts itself.<br> </div> Sat, 04 Oct 2025 19:54:10 +0000 How to check copyright? https://lwn.net/Articles/1040841/ https://lwn.net/Articles/1040841/ pbonzini <div class="FormattedComment"> Network weights are a lossy representation of the training material, but they still contain it and are able to reproduce it if asked, as shown above (and also in the New York Times lawsuit against OpenAI).<br> <p> In fact, bigger models also increase the memorization ability.<br> </div> Sat, 04 Oct 2025 19:51:14 +0000 How to check copyright? https://lwn.net/Articles/1040794/ https://lwn.net/Articles/1040794/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;two separate, non-interacting, teams</span><br> <p> The two teams are interacting. 
Via documentation.<br> Which is IMO not that dissimilar from the network weights, which are passed from the network trainer application to the network executor application.<br> <p> <span class="QuotedText">&gt;Since by hypothesis the LLM had access to all the code on github</span><br> <p> I don't agree.<br> The training application had access to the code.<br> And the executing application doesn't have access to the code.<br> <p> The generated code comes out of the executing application.<br> <p> </div> Fri, 03 Oct 2025 18:22:53 +0000 How to check copyright? https://lwn.net/Articles/1040789/ https://lwn.net/Articles/1040789/ farnz It definitely can be used to write the new program; because it had access to the code on GitHub, you cannot assert lack of access as evidence of lack of copying (which is what the clean room setup is all about), but you can still assert that either the copied code falls on the idea side of the idea-expression distinction, or that it is not a derived work (in the legal, not mathematical, sense) for the purposes of copyright law for some other reason. <p>The point of the clean room process is that the only thing you need to look at to confirm that the second team did not copy the original code is the specification produced by the first team, which makes it tractable to confirm that the second team's output is not a derived work by virtue of no copying being possible. <p>But that's not the only way to avoid infringing - it's just a well-understood and low-risk way to do so. Fri, 03 Oct 2025 17:43:51 +0000 How to check copyright? 
https://lwn.net/Articles/1040786/ https://lwn.net/Articles/1040786/ ballombe <div class="FormattedComment"> <span class="QuotedText">&gt; Why are you so sure?</span><br> <p> Clean room reverse engineering requires that there be two separate, non-interacting, teams: one having access to the original code and writing its specification, and a second team that never accesses the original code and relies only on the specification to write the new program.<br> <p> Since by hypothesis the LLM had access to all the code on github, it cannot be used to write the new program.<br> <p> Remember, when some Windows code was leaked, WINE developers were advised not to look at it to avoid being "tainted".<br> </div> Fri, 03 Oct 2025 17:39:35 +0000 How to check copyright? https://lwn.net/Articles/1040780/ https://lwn.net/Articles/1040780/ Wol <div class="FormattedComment"> And another thing - how much copyright violation is being blamed on the LLM, when the query being *sent* to the LLM itself is a pretty blatant copyright violation? At which point we're seriously into "unclean hands", and if the querier is not the copyright holder, they could easily find themselves named as a co-defendant (quite likely the more culpable defendant!) even if they're not the deeper pocket.<br> <p> If I had an LLM and found myself sued like that, I'd certainly want to drag the querier into it ...<br> <p> Cheers,<br> Wol<br> </div> Fri, 03 Oct 2025 16:47:19 +0000 How to check copyright? https://lwn.net/Articles/1040776/ https://lwn.net/Articles/1040776/ farnz There's a critical piece of data missing - what proportion of human-written code is strikingly similar to existing open-source implementations? <p>We know that humans accidentally and unknowingly infringe, too. Why can't we reuse the existing lawyer-approved solution to that problem for LLM output? Fri, 03 Oct 2025 15:25:05 +0000 How to check copyright? 
https://lwn.net/Articles/1040775/ https://lwn.net/Articles/1040775/ stefanha <div class="FormattedComment"> I am not claiming that all AI output is covered by the copyright of its training data. It seems reasonable that generated output is treated in the same way as when humans who have been exposed to copyrighted content create something.<br> <p> In the original comment I linked to a paper about extracting copyrighted content from LLMs. A web search brings up a bunch more in this field that I haven't read. Here is one explicitly about generated code (<a href="https://arxiv.org/html/2408.02487v3">https://arxiv.org/html/2408.02487v3</a>) that says "we evaluate 14 popular LLMs, finding that even top-performing LLMs produce a non-negligible proportion (0.88% to 2.01%) of code strikingly similar to existing open-source implementations".<br> <p> I think AI policies are getting ahead of themselves when they assume that a contributor can vouch for license compliance. There needs to be some kind of lawyer-approved solution to this so that the open source community is protected from a copyright mess.<br> </div> Fri, 03 Oct 2025 15:12:58 +0000 How to check copyright? https://lwn.net/Articles/1040773/ https://lwn.net/Articles/1040773/ stefanha <div class="FormattedComment"> <span class="QuotedText">&gt; There's at least two cases where "knowing whether the LLM output is under copyright or not" is completely irrelevant:</span><br> <p> I agree. I'm curious if anyone has solutions when copyright does come into play. It seems like a major use case that needs to be addressed.<br> </div> Fri, 03 Oct 2025 14:53:56 +0000 How to check copyright? https://lwn.net/Articles/1040690/ https://lwn.net/Articles/1040690/ alex <div class="FormattedComment"> I went through this many moons ago when one of the start-ups I worked at was working on an emulation layer. The lawyer made a distinction between "retained knowledge" (i.e. 
what was in our heads) and copying verbatim from either the files or notes. I had to hand in all my notebooks when I left the company, but assuming no reference I could implement something roughly the same way I had before. There is a lot of code which isn't copyrightable because it is either the only way to do it or it's "obvious".<br> <p> Patents were a separate legal rabbit hole.<br> </div> Fri, 03 Oct 2025 13:20:11 +0000 How to check copyright? https://lwn.net/Articles/1040688/ https://lwn.net/Articles/1040688/ stefanha <div class="FormattedComment"> <span class="QuotedText">&gt; every Red Hat employee is now required to state whatever tech IBM wants a slice of is something terrific you should be enthusiastic about</span><br> <p> I felt bemused reading this. Here we are in a comment thread that I, a Red Hat employee, started about issues with the policy proposal. Many of the people raising questions on the Fedora website are also Red Hat employees.<br> <p> It's normal for discussions to happen in public in the community. Red Hatters can and will disagree with each other.<br> </div> Fri, 03 Oct 2025 12:56:08 +0000 How to check copyright? https://lwn.net/Articles/1040686/ https://lwn.net/Articles/1040686/ Wol <div class="FormattedComment"> I can't point you to the law(s) themselves, but the European position - IN LAW - is that there is no difference between an AI reading and learning, and a person reading and learning.<br> <p> So I guess (and this is not clear) that there is no difference between an AI regurgitating what it's learnt, and a person regurgitating what they've learnt.<br> <p> So it basically comes down to the question "how close is the output to the input, and was the output obvious and not worthy of copyright protection?"<br> <p> Given the tendency of AI to hallucinate, I guess the output of an AI is LESS likely to violate copyright than that of a human. 
Of course, the corollary is that the output of a human is more valuable :-)<br> <p> Cheers,<br> Wol<br> </div> Fri, 03 Oct 2025 11:16:56 +0000 How to check copyright? https://lwn.net/Articles/1040673/ https://lwn.net/Articles/1040673/ farnz Clean-room reverse-engineering isn't part of the codified side of copyright law; rather, it's a process that the courts recognise as guaranteeing that the work produced in the clean room cannot be a derived work of the original. <p>To be a derived work, there must be some copying of the original, intended or accidental. The clean-room process guarantees that the people in the clean-room cannot copy the original, and therefore, if they do come up with something that appears to be a copy of the original, it's not a derived work. <p>You can, of course, do reverse-engineering and reimplementation without a clean-room setup; it's just that you then have to show that each piece that's alleged to be a literal copy of the original falls on the right side of the idea-expression distinction to not be a derived work, instead of being able to show that no copying took place. Fri, 03 Oct 2025 09:40:00 +0000 How to check copyright? https://lwn.net/Articles/1040679/ https://lwn.net/Articles/1040679/ nim-nim <div class="FormattedComment"> Anyway, if anyone had any doubt about where the sudden urge to be enthusiastic about AI is coming from, here is a confirmation:<br> <p> <a href="https://blogs.gnome.org/chergert/2025/10/03/mi2-glib/">https://blogs.gnome.org/chergert/2025/10/03/mi2-glib/</a><br> </div> Fri, 03 Oct 2025 09:38:50 +0000 Clean-room reverse engineering https://lwn.net/Articles/1040670/ https://lwn.net/Articles/1040670/ rschroev <div class="FormattedComment"> Clean-room reverse engineering is a whole different topic though, isn't it? It's what Compaq did back in the 80's to reverse engineer the IBM PC's BIOS, enabling them to make compatible machines in a legal way. 
Notably the people who studied IBM's BIOS and the ones who implemented Compaq's new one were different teams, to avoid any copyright issues. <br> <p> That's a whole different situation than either people or LLMs reading code and later using their knowledge to write new code.<br> </div> Fri, 03 Oct 2025 08:07:57 +0000 How to check copyright? https://lwn.net/Articles/1040666/ https://lwn.net/Articles/1040666/ mb <div class="FormattedComment"> <span class="QuotedText">&gt; Only stuff created by a human is eligible for copyright protection.</span><br> <p> That is a completely different topic, though.<br> This is about *re*-producing existing actually copyrighted content.<br> </div> Fri, 03 Oct 2025 07:01:51 +0000 How to check copyright? https://lwn.net/Articles/1040648/ https://lwn.net/Articles/1040648/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; Can you back this up with some actual legal text or descriptions from lawyers?</span><br> <p> Only stuff created by a human is eligible for copyright protection.<br> <p> See <a href="https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf">https://www.copyright.gov/comp3/chap300/ch300-copyrightab...</a> section 307.<br> <p> Doesn't get any simpler than that.<br> <p> </div> Fri, 03 Oct 2025 00:16:09 +0000 How to check copyright? https://lwn.net/Articles/1040644/ https://lwn.net/Articles/1040644/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;Legally, there's a huge distinction between the two.</span><br> <p> Interesting.<br> Can you back this up with some actual legal text or descriptions from lawyers?<br> I'd really be interested in learning what lawyers think the differences are.<br> </div> Thu, 02 Oct 2025 21:17:10 +0000 How to check copyright? 
https://lwn.net/Articles/1040643/ https://lwn.net/Articles/1040643/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; What is the fundamental difference between</span><br> <span class="QuotedText">&gt; a) human brains</span><br> <span class="QuotedText">&gt; b) LLMs processing</span><br> <p> Legally, there's a huge distinction between the two.<br> <p> And please keep in mind that "legally" is rarely satisfied with "technically" arguments.<br> <p> <p> </div> Thu, 02 Oct 2025 21:07:56 +0000 How to check copyright? https://lwn.net/Articles/1040637/ https://lwn.net/Articles/1040637/ mb <div class="FormattedComment"> Why are you so sure?<br> <p> What is the fundamental difference between<br> <p> a) human brains processing code into documentation and then into code<br> and <br> b) LLMs processing code into very abstract and compressed intermediate representations and then into code?<br> <p> LLM models would probably contain *less* information about the original code than documentation would.<br> </div> Thu, 02 Oct 2025 20:05:05 +0000 How to check copyright? https://lwn.net/Articles/1040635/ https://lwn.net/Articles/1040635/ ballombe <div class="FormattedComment"> Clean-room reverse-engineering is recognized and codified by copyright law, but LLMs certainly do not do clean-room reverse engineering.<br> </div> Thu, 02 Oct 2025 19:55:17 +0000 How to check copyright? 
https://lwn.net/Articles/1040626/ https://lwn.net/Articles/1040626/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; The trouble is that your argument depends on the LLM's output being a derived work of its training data; this is not necessarily true, even if you can demonstrate that the training data is present in some form inside the LLM's weights (not least because literal copying is not necessarily a derived work).</span><br> <p> It also "conveniently forgets" that any developer worth their salt is exposed to a lot of code for which they do not hold the copyright, and may not even be aware of the fact that they are recalling verbatim chunks of code they memorised at Uni / another place of work / a friend showed it to them.<br> <p> So all this complaining about AI-generated code could also be applied pretty much the same to developer-generated code; it's just that we don't think it's a problem if it's a developer, while some people think it is if it's an AI.<br> <p> Personally, I'd be quite happy to ingest AI-generated code into my brain, and then regurgitate the gist of it (suitably modified for corporate guidelines/whatever). By the time you've managed to explain in excruciating detail to the AI what you want, it's probably better to give it a simple explanation and rewrite the result.<br> <p> Okay, that end result may not be "clean room" copyright compliant, but given the propensity for developers to remember code fragments, I expect very little code is.<br> <p> We have a problem with musicians suing each other for copying fragments of songs (which the "copier" was probably unaware of - which the copyright *holder* probably copied as well without being aware of it!!!). How can we keep that out of computer programming? We can't, and that's assuming AI had no hand in it!<br> <p> Cheers,<br> Wol<br> </div> Thu, 02 Oct 2025 17:16:09 +0000 How to check copyright? 
https://lwn.net/Articles/1040613/ https://lwn.net/Articles/1040613/ farnz You continue to ignore the massive difference between "non-trivial" and "a derived work of the training data, protected by copyright". That's deeply critical to this conversation, and unless you can show that an AI's output is inherently a derived work, you're asking us to accept a tautology. <p>Fundamentally, and knowing how transformers work, I do not accept the claim that an AI's output is inherently a derived work of the training data. It is definitely (and demonstrably) possible to get an AI to output things that are derived works of the training data, with the right prompts, but it is also entirely possible to get an AI to produce output that is not derived from the training data for the purposes of copyright law. <p>It is also possible to get humans to produce outputs that are derived works of <em>their</em> training data, but that doesn't imply that all works produced by humans are derived works of their training data, for the purposes of copyright law. Thu, 02 Oct 2025 15:34:46 +0000 How to check copyright? https://lwn.net/Articles/1040611/ https://lwn.net/Articles/1040611/ io-cat <div class="FormattedComment"> I see. The intent of that part of my comment does not contradict your sentiment.<br> <p> If it is too hard or impossible to guarantee that the license of LLM output is compliant with the rules - it doesn’t make sense to me to encourage or perhaps even allow usage of such tools until this is ironed out by their proponents.<br> <p> I’m focusing my opinion here specifically on the licensing question, aside from other potentially problematic things.<br> </div> Thu, 02 Oct 2025 15:21:21 +0000 How to check copyright? 
https://lwn.net/Articles/1040607/ https://lwn.net/Articles/1040607/ farnz There are at least two cases where "knowing whether the LLM output is under copyright or not" is completely irrelevant: <ol> <li>You don't know how to solve the problem; you ask an LLM to explain how to solve this problem, and then you manually write the code yourself, based on the LLM's explanation. This is an existing problem - it's the same as reading a book or a paper that explains how to solve this problem - and the answer is to "assume it's covered by copyright, but write your own solution, don't just copy blindly." That applies whether "the text" is a book, a paper, or some LLM-generated work. <li>The part of the contribution copied from the LLM's output is one that you've inspected, and confirmed would be covered by an exception to copyright law even if the work it is taken from is under copyright. In this case, the copyright status of the LLM's output is irrelevant, since the part you're using is one you can use even if it's under copyright. Again, this is a pre-existing problem; if I read (say) IEEE 1003.1-2024 (or one of the many things that's copied text from it verbatim, like <a href="https://man7.org/linux/man-pages/man2/rename.2.html">this Linux man page</a>), and copy part of it into my contribution, that's copying from a document under copyright and licensed under restrictive terms, but because it doesn't rise to the point where my copying creates a derived work, copyright status is irrelevant. </ol> Thu, 02 Oct 2025 15:18:15 +0000 Productivity https://lwn.net/Articles/1040609/ https://lwn.net/Articles/1040609/ nim-nim <div class="FormattedComment"> But KISS simplicity is the hardest thing to achieve. That’s exactly what an automaton is bad at: automatons are good at piling up slop till it sort of works, not at reducing things to their simplest core.<br> <p> Reducing things requires opinions on how things should end up. 
It requires the ability to evaluate features and the willingness to cull parts that add too much complexity for too little benefit. Outside the distro world, we see houses of cards of hundreds of interlinked software components that no one knows how to evaluate for robustness, security and maintainability. They are the direct result of automation with no human in the loop to put a stop to it when it gets out of hand.<br> </div> Thu, 02 Oct 2025 15:10:41 +0000 How to check copyright? https://lwn.net/Articles/1040608/ https://lwn.net/Articles/1040608/ stefanha <div class="FormattedComment"> <span class="QuotedText">&gt; And the reality today is that you can take AI-generated output, and confirm by inspection that it's not possible for it to be a derived work, and hence that licensing is irrelevant. </span><br> <p> I agree.<br> <p> It is common practice to use AI to generate non-trivial output though. If the intent of the policy is to allow trivial AI-generated contributions, then it should mention this to prevent legal issues.<br> </div> Thu, 02 Oct 2025 15:09:28 +0000 How to check copyright? https://lwn.net/Articles/1040605/ https://lwn.net/Articles/1040605/ nim-nim <div class="FormattedComment"> I *think* I reacted strongly to the first part of your post. I just hate the “It’s too hard, let’s pretend the problem does not exist” attitude of hype advocates.<br> <p> I completely agree that the “enthusiasm” and gushing about the greatness of AI has no place in a community policy. That’s pure unadulterated corporate brown-nosing. Good community policies should be dry and to the point, help contributors contribute, not feel like an advert for something else.<br> </div> Thu, 02 Oct 2025 14:57:18 +0000 How to check copyright? https://lwn.net/Articles/1040604/ https://lwn.net/Articles/1040604/ stefanha <div class="FormattedComment"> <span class="QuotedText">&gt; If you limit yourself to the subsets of LLM output that are not derived works (e.g. 
because they're covered by the equivalents of the scènes à faire doctrine in US copyright law or other parts of the idea-expression distinction), then you can comply with your legal obligations. You are forced to do the work to confirm that the LLM output you're using is not, legally speaking, a derived work, but then it's safe to use. </span><br> <p> I started this thread by asking:<br> <p> <span class="QuotedText">&gt; But how is a contributor supposed to know whether AI-generated output is covered by copyright and under a compatible license?</span><br> <p> And here you are saying that if you know it's not a derived work, then it's safe to use. I agree with you.<br> <p> The problem is that we still have no practical way of knowing whether the LLM output is under copyright or not.<br> </div> Thu, 02 Oct 2025 14:54:04 +0000 How to check copyright? https://lwn.net/Articles/1040513/ https://lwn.net/Articles/1040513/ zdzichu <div class="FormattedComment"> Fortunately, Red Hat employees are a minority among Fedora contributors (around 30%). The rest of us are free to totally ignore what people at IBM are enthusiastic about.<br> </div> Thu, 02 Oct 2025 13:58:56 +0000 How to check copyright? https://lwn.net/Articles/1040512/ https://lwn.net/Articles/1040512/ io-cat <div class="FormattedComment"> I think we are in agreement that this should not be the community’s problem.<br> <p> Could you clarify how you perceived my comment? I’m not sure how your response, especially given the tone, follows from it :)<br> </div> Thu, 02 Oct 2025 13:53:55 +0000 How to check copyright? https://lwn.net/Articles/1040509/ https://lwn.net/Articles/1040509/ farnz The trouble is that your argument depends on the LLM's output being a derived work of its training data; this is not necessarily true, even if you can demonstrate that the training data is present in some form inside the LLM's weights (not least because literal copying is not necessarily a derived work). 
<p>If you limit yourself to the subsets of LLM output that are not derived works (e.g. because they're covered by the equivalents of <a href="https://en.wikipedia.org/wiki/Idea%E2%80%93expression_distinction#Sc%C3%A8nes_%C3%A0_faire">the scènes à faire doctrine in US copyright law</a> or other parts of the idea-expression distinction), then you can comply with your legal obligations. You are forced to do the work to confirm that the LLM output you're using is not, legally speaking, a derived work, but then it's safe to use. Thu, 02 Oct 2025 13:27:05 +0000 How to check copyright? https://lwn.net/Articles/1040504/ https://lwn.net/Articles/1040504/ nim-nim <div class="FormattedComment"> <span class="QuotedText">&gt; I understand your position, but the primary distinction in my view is that nowadays there is no reliable way to get licensing information from an LLM in most cases.</span><br> <p> Well that’s a problem for LLM advocates, not for Fedora. It’s not for the community to solve problems in products pushed by wealthy corporations. The sad part of IBM buying Red Hat is that Fedora is now part of the corporate hype cycle, and every Red Hat employee is now required to state whatever tech IBM wants a slice of is something terrific you should be enthusiastic about. Red Hat consistently outperformed the IBMs of the day because it delivered solid boring tech that solved actual problems, not over-hyped vaporware.<br> <p> LLM tech has some core unsolved problems, just like cryptocurrencies (the previous hype cycle) had core unsolved problems. Sad to be you if you were foolish enough to put your savings there listening to the corporate hype; the corporate hype does not care about you, nor about communities.<br> <p> </div> Thu, 02 Oct 2025 13:21:53 +0000 How to check copyright? https://lwn.net/Articles/1040507/ https://lwn.net/Articles/1040507/ farnz Right, but policy should not be written to cover just today. It also needs to cover tomorrow's evolutions of the technology. 
And that means covering both hypotheticals: an LLM that could correctly attribute derived works, and a contributor who uses something that's AI-based but is careful to make sure that the stuff they submit to Fedora is not a derived work in the copyright sense (and hence licensing is irrelevant). <p>And the reality today is that you can take AI-generated output, and confirm by inspection that it's not possible for it to be a derived work, and hence that licensing is irrelevant. Thu, 02 Oct 2025 13:18:51 +0000 How to check copyright? https://lwn.net/Articles/1040508/ https://lwn.net/Articles/1040508/ farnz I missed that, skipping over what I thought was "corporate boilerplate" to the bolded sentence afterwards: "The contributor is always the author and is fully accountable for their contributions." Thu, 02 Oct 2025 13:17:35 +0000