LWN: Comments on "Large language models for patch review" https://lwn.net/Articles/1041694/ This is a special feed containing comments posted to the individual LWN article titled "Large language models for patch review". en-us Wed, 29 Oct 2025 14:03:40 +0000 Wed, 29 Oct 2025 14:03:40 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Positive experiences on other projects (curl) https://lwn.net/Articles/1043192/ https://lwn.net/Articles/1043192/ Karellen <div class="FormattedComment"> Ah, sorry for misreading your meaning, my mistake!<br> </div> Fri, 24 Oct 2025 10:57:39 +0000 Positive experiences on other projects (curl) https://lwn.net/Articles/1043173/ https://lwn.net/Articles/1043173/ taladar <div class="FormattedComment"> That is exactly what I meant.<br> </div> Fri, 24 Oct 2025 08:28:26 +0000 Positive experiences on other projects (curl) https://lwn.net/Articles/1043149/ https://lwn.net/Articles/1043149/ edgewood <div class="FormattedComment"> I believe that the luxury the OP was referring to is "most of those (bugs) were in implementations of protocols with decades old specs, so specs that were definitely part of the training data of the LLM".<br> <p> In other words, curl might be a best-case scenario for LLM bug finding, due to it being code that implements well known public specs. Other codebases, which are unlikely to implement well known public specs that are part of the LLM training data, may not benefit as much.<br> </div> Fri, 24 Oct 2025 03:53:22 +0000 Positive experiences on other projects (curl) https://lwn.net/Articles/1043124/ https://lwn.net/Articles/1043124/ Karellen <div class="FormattedComment"> You expect most projects would get a lower rate of errors (per-kLOC?) than curl?<br> <p> I'm not sure about that. From what I've read, the curl team (and Daniel Stenberg in particular) pays a lot of attention to code quality. Notably, they try to make a habit of, when one bug is found, searching the codebase for similar bugs and eliminating them at the same time. More details at<br> <p> <a href="https://daniel.haxx.se/blog/2025/04/07/writing-c-for-curl/">https://daniel.haxx.se/blog/2025/04/07/writing-c-for-curl/</a><br> <p> Daniel also did a presentation on the subject at FOSDEM 2025 - Tightening every bolt: <a href="https://www.youtube.com/watch?v=Yr5fPxZvhOw">https://www.youtube.com/watch?v=Yr5fPxZvhOw</a> (25:44)<br> </div> Thu, 23 Oct 2025 21:47:56 +0000 Positive experiences on other projects (curl) https://lwn.net/Articles/1042881/ https://lwn.net/Articles/1042881/ taladar <div class="FormattedComment"> On the other hand, Daniel also mentions that the curl code base is several hundred thousand lines of code, and after decades of not applying a tool like that he only got a few hundred suggestions from the AI tool, most of them minor things or things they did not want to fix. Only a few were actual bugs, and most of those were in implementations of protocols with decades old specs, so specs that were definitely part of the training data of the LLM.<br> <p> Most code bases do not have that luxury, so I would expect the useful results to be even lower in number.<br> </div> Wed, 22 Oct 2025 08:08:20 +0000 Positive experiences on other projects (curl) https://lwn.net/Articles/1042863/ https://lwn.net/Articles/1042863/ lmb <div class="FormattedComment"> Daniel, who is famous for complaining (rightfully!) 
about AI slop reports and issues raised, recently had good things to say about AI code analysis, which I think is worth reviewing in this context: <a href="https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyzers/">https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-ana...</a> (referring to <a href="https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters">https://joshua.hu/llm-engineer-review-sast-security-ai-to...</a>)<br> <p> That's not exclusively LLM use, but still, clearly they were used in generating the reports. To surprising effect.<br> <p> In conversation (alas not public) with Paulus Schoutsen (the Home Assistant person), he brought up an interesting point on the use of AI/LLMs for code review that they've seen in the HA world: experienced people benefit, whereas more junior people suffer.<br> <p> Because the former can almost immediately dismiss the LLM going off the rails (as they, as we all know, are prone to do), and focus on what makes sense as added value, and this actually does accelerate them. A more junior person has to go through the review and understand it, and now suddenly has way more feedback to deal with than a human reviewer previously would have provided.<br> <p> It probably makes sense to review the patches at least quickly first before looking at the LLM review, to avoid being too influenced by its report; but overall, I'd suspect this can be a valuable addition.<br> <p> </div> Tue, 21 Oct 2025 18:19:24 +0000 What about false negatives? https://lwn.net/Articles/1042857/ https://lwn.net/Articles/1042857/ alx.manpages <div class="FormattedComment"> Hi Sasha!<br> <p> Thanks for the clarification. I'll remove that link.<br> <p> On the other hand, even if this specific case wasn't such a case, I'm still worried that it will eventually happen somewhere. I still believe that reviewing output from AI is way more difficult than reviewing human output, and comparable to reviewing output from malicious humans, which is something I don't want to have to review ever. If others feel confident enough to be able to review AI output, they're welcome to accept it in their projects, but I don't feel like taking the risk.<br> <p> Cheers,<br> Alex<br> </div> Tue, 21 Oct 2025 16:46:34 +0000 Financial AI bubble https://lwn.net/Articles/1042771/ https://lwn.net/Articles/1042771/ paulj <div class="FormattedComment"> Oracle's build-out seems to be debt leveraged. E.g., this is just 1 banking deal for 1 set of DCs: <a href="https://cryptorank.io/news/feed/17010-banks-bet-oracles-cloud-38b-data-center">https://cryptorank.io/news/feed/17010-banks-bet-oracles-c...</a>. And just that $38B deal represents a sizeable portion of their 2025 revenue ($57.4B). And if I read that right, there's another $23B finance deal there too for another DC build-out. <br> <p> Wow. All for an industry with little revenue, an industry with massive baked-in costs which are not going to come down. <br> <p> The crash will be hard.<br> </div> Tue, 21 Oct 2025 11:00:48 +0000 Financial AI bubble https://lwn.net/Articles/1042769/ https://lwn.net/Articles/1042769/ paulj <div class="FormattedComment"> My assumption is the likes of NVidia will be hurt badly, but survive. The world will still need GPUs and GPGPUs after all.<br> <p> Those who bet heavily on leveraged investment for building CapEx-intensive DCs will crash out. Even if you switch off the electricity to the DC you just built, you still have large loan repayments falling due regularly. There are a couple of companies in this position. 
Some may be large (I'm unaware of how Oracle is financing its huge DC build-out; if it's very leveraged, they may end up in trouble).<br> </div> Tue, 21 Oct 2025 10:22:19 +0000 Financial AI bubble https://lwn.net/Articles/1042766/ https://lwn.net/Articles/1042766/ farnz You're also assuming that NVidia has increased expenditure to match AI income. <p>If NVidia's committed spend requires no more than 20% of their revenue, and they're getting at least 20% from things other than AI, an AI crash "merely" destroys their share price. If they've committed to long-term projects that need 30% of their revenue, and AI is 75% of their revenue, an AI crash can take the company out. Tue, 21 Oct 2025 09:52:31 +0000 What about false negatives? https://lwn.net/Articles/1042757/ https://lwn.net/Articles/1042757/ comex <div class="FormattedComment"> That is not true. For any given task, hallucination rates have decreased exponentially over time, as shown by benchmarks and confirmed by my own limited experience with chatbots. But the tasks we expect LLMs to complete have likewise increased in difficulty, shooting the hallucination rate back up.<br> <p> To be fair, a large amount of this improvement has simply come from newer models being able to answer harder questions. LLMs have always been much more likely to hallucinate when they don’t know the answer to a question, so knowing more answers reduces hallucinations ‘for free’. What’s harder for models is admitting when they don’t know or can’t solve something. They have improved on this front too, but the improvement has been slower. And if you’re waiting for a breakthrough that brings hallucinations to human-like low rates, that has indeed yet to be found.<br> <p> We will see how things develop moving forward.<br> </div> Tue, 21 Oct 2025 05:38:11 +0000 Financial AI bubble https://lwn.net/Articles/1042736/ https://lwn.net/Articles/1042736/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; There absolutely is a viable market segment: selling data (presumably aggregated data) that users share with AI companies to advertisers.</span><br> <p> "viable" meaning "can turn a profit"<br> <p> Which, AFAICT, has yet to be demonstrated.<br> <p> <p> </div> Mon, 20 Oct 2025 21:22:40 +0000 Financial AI bubble https://lwn.net/Articles/1042728/ https://lwn.net/Articles/1042728/ khim <p>There absolutely is a viable market segment: selling data (presumably aggregated data) that users share with AI companies to advertisers.</p> <p>It's more than enough to support something like GPT-3. 
And GPT-3 is enough to attract a lot of people who would be willing to share that data.</p> <p>Whether such AI would be sophisticated enough to review kernel patches is an interesting question, of course.</p> Mon, 20 Oct 2025 20:38:54 +0000 Financial AI bubble https://lwn.net/Articles/1042726/ https://lwn.net/Articles/1042726/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; That, essentially, means that we don't know which AI provider would survive, but at least one of them would.</span><br> <p> You're assuming there is a viable market segment to be leader of ...<br> <p> Cheers,<br> Wol<br> </div> Mon, 20 Oct 2025 20:33:06 +0000 Financial AI bubble https://lwn.net/Articles/1042724/ https://lwn.net/Articles/1042724/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; As long as idiots like SoftBank keep taking loans to finance the whole thing, it will keep moving; when the bubble bursts… NVIDIA will still have gaming GPUs to sell.</span><br> <p> That forgets Dickens's dictum - "Annual income £20, annual expenditure £20 0s 6d, penury; Annual income £20, annual expenditure £19 19s 6d, happiness". And as a lot of people personally are finding out when they are laid off, just because your income may be slashed, your bills aren't.<br> <p> If Nvidia are making most of their money from AI GPUs, and that market evaporates overnight, can they cut their expenditure fast enough to avoid it dragging them under?<br> <p> Cheers,<br> Wol<br> </div> Mon, 20 Oct 2025 20:28:25 +0000 Financial AI bubble https://lwn.net/Articles/1042626/ https://lwn.net/Articles/1042626/ geert <div class="FormattedComment"> <span class="QuotedText">&gt; Are you maybe too young to remember Google+? :)</span><br> <p> You mean Orkut? Fortunately I never got a T-shirt of that ;-)<br> <p> It's not just Google: the T-shirt-with-URL I once got from Intel long survived the actual website and domain...<br> Oh, apparently <a href="https://lesswatts.org/">https://lesswatts.org/</a> has been rescued from the cybersquatters...<br> </div> Mon, 20 Oct 2025 14:10:59 +0000 Financial AI bubble https://lwn.net/Articles/1042619/ https://lwn.net/Articles/1042619/ khim <font class="QuotedText">&gt; Question is how hard that will hurt those who have invested in it.</font> <p>As usual: the idiots who invested in it will be burnt badly, while the companies involved won't be hurt too much. AI is a classic Ponzi scheme these days: NVIDIA invests in CoreWeave and OpenAI, and then these buy GPUs from NVIDIA. That means they can show anything they want in their books in the short term by playing paper games.</p> <p>As long as idiots like SoftBank keep taking loans to finance the whole thing, it will keep moving; when the bubble bursts… NVIDIA will still have gaming GPUs to sell.</p> <p>The dot-com bubble crushed Sun not because it was selling servers, but because it had nothing else to sell. The companies that could have invested in gaming GPUs and left NVIDIA in Sun's position, AMD and Intel, invested in AI instead, so they won't be able to kill NVIDIA when NVIDIA is vulnerable.</p> Mon, 20 Oct 2025 13:26:12 +0000 Financial AI bubble https://lwn.net/Articles/1042618/ https://lwn.net/Articles/1042618/ paulj <div class="FormattedComment"> Not impossible. NVidia's future success is heavily tied to AI now. It's promising the equivalent of about 76% of its annual revenue for 2025 to OpenAI - a company with $12 billion in revenue. Oracle looks to be heavily exposed to OpenAI too.<br> <p> OpenAI certainly could crash hard. 
Question is how hard that will hurt those who have invested in it.<br> </div> Mon, 20 Oct 2025 13:14:09 +0000 Financial AI bubble https://lwn.net/Articles/1042617/ https://lwn.net/Articles/1042617/ khim <font class="QuotedText">&gt; I'm less familiar with the Microsoft world, I'll let someone else provide examples.</font> <p>How is the fate of <a href="https://en.wikipedia.org/wiki/Windows_Mobile">Windows Mobile</a> any different from ChromeOS?</p> <p>But note that in all these examples the products had one "fatal flaw": they weren't even close to being market leaders in their particular segment.</p> <p>That, essentially, means that we don't know which AI provider would survive, but at least one of them would.</p> Mon, 20 Oct 2025 13:09:51 +0000 Financial AI bubble https://lwn.net/Articles/1042616/ https://lwn.net/Articles/1042616/ egb <div class="FormattedComment"> Long shot answer? Nvidia. I want to see how hard they'll $EXPLETIVE themselves when AI-related hardware sales plummet.<br> </div> Mon, 20 Oct 2025 13:04:02 +0000 What about false negatives? https://lwn.net/Articles/1042604/ https://lwn.net/Articles/1042604/ taladar <div class="FormattedComment"> AI tools have literally been stagnant on the issue of hallucinations since GPT-3 was first introduced.<br> </div> Mon, 20 Oct 2025 08:47:27 +0000 What about false negatives? https://lwn.net/Articles/1042603/ https://lwn.net/Articles/1042603/ taladar <div class="FormattedComment"> But the compiler or other classic analysis tool always gives you the false positives for the same type of edge case, whereas the AI tool is completely random in when it hallucinates.<br> </div> Mon, 20 Oct 2025 08:45:59 +0000 What about false negatives? https://lwn.net/Articles/1042596/ https://lwn.net/Articles/1042596/ drago01 <div class="FormattedComment"> <span class="QuotedText">&gt; Yes, that's my assumption.</span><br> <p> Others already answered that below. But that makes zero sense.<br> <p> You even assume that there is no innovation in that area; tools do improve (actually quite fast due to healthy competition) - so even if the quality isn't good enough today, tomorrow's AI tools may be much better. You are trying to outright ban a technology.<br> </div> Mon, 20 Oct 2025 04:14:46 +0000 What about false negatives? https://lwn.net/Articles/1042589/ https://lwn.net/Articles/1042589/ sashal <div class="FormattedComment"> <span class="QuotedText">&gt; See also: &lt;<a href="https://xcancel.com/spendergrsec/status/1958264076162998771">https://xcancel.com/spendergrsec/status/1958264076162998771</a>&gt;</span><br> <p> To clarify:<br> <p> 1. The code in the patch wasn't AI generated.<br> 2. The issue described in that post is an issue as much as "root can shoot himself in the foot!".<br> <p> You are parroting a toxic person's speculative and misinformed post made to support his (again, toxic) business practices in an attempt to convince others that AI is somehow bad.<br> <p> Is AI the problem here, or is it the behavior and conduct that you are modelling?<br> </div> Sun, 19 Oct 2025 19:07:32 +0000 Git forges https://lwn.net/Articles/1042562/ https://lwn.net/Articles/1042562/ Cyberax <div class="FormattedComment"> Yeah. And why not set an alias for it, so you don't have to type it every time? Then just remember to use the regular "commit" when you are preparing a PR.<br> <p> Oh wait....<br> </div> Sun, 19 Oct 2025 08:14:00 +0000 What about false negatives? 
https://lwn.net/Articles/1042555/ https://lwn.net/Articles/1042555/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;you don't know when AI lies to you that you must assume it always lies to you</span><br> <p> Therefore, you must assume any classic tool lies to you.<br> A compiler gives you false positive warnings, so you can't know whether a warning is real or not.<br> </div> Sat, 18 Oct 2025 21:42:40 +0000 What about false negatives? https://lwn.net/Articles/1042554/ https://lwn.net/Articles/1042554/ alx.manpages <div class="FormattedComment"> <span class="QuotedText">&gt; You are moving the goal posts.</span><br> <p> I don't think so. I've consistently meant that it's because you don't know when AI lies to you that you must assume it always lies to you. If I know when you're going to lie to me, I know when to ignore you, and thus you're going to have a hard time lying to me.<br> <p> Maybe I wasn't explicit enough here (I tend to not repeat _everything_ every time I discuss a topic; I assume people can follow links, and also use common sense), but I've said this before in the mailing list where the policy is being worked on. I've also argued this point elsewhere when debating AI tools.<br> <p> And that was the rationale for the following paragraph in the policy, in the first place:<br> <p> + AI tools should be considered adversarial, as if they<br> + were a black box with Jia Tan inside them.<br> </div> Sat, 18 Oct 2025 21:34:00 +0000 What about false negatives? https://lwn.net/Articles/1042552/ https://lwn.net/Articles/1042552/ mb <div class="FormattedComment"> You are moving the goal posts.<br> </div> Sat, 18 Oct 2025 21:09:50 +0000 What about false negatives? https://lwn.net/Articles/1042551/ https://lwn.net/Articles/1042551/ alx.manpages <div class="FormattedComment"> There's a big difference: false positives or negatives in a compiler are deterministic. You can reproduce them, and know when the compiler has you covered or not. Also, those false Ps and Ns almost always decrease, with regressions being rare (but again, when they exist, at least you can reproduce them and file a bug).<br> <p> With an AI tool, there are no rules. You may have the exact same chat with the tool twice, with different results.<br> <p> I'm amazed by how few people seem to value determinism in tools.<br> </div> Sat, 18 Oct 2025 21:06:18 +0000 What about false negatives? https://lwn.net/Articles/1042549/ https://lwn.net/Articles/1042549/ ojeda <div class="FormattedComment"> A reviewing tool being wrong sometimes (AI or not) doesn't mean a patch always gets automatically modified. Even if a patch gets modified in a wrong way at some point, it doesn't follow that on average it is all a net negative. What usually happens is that tools get disabled if they are not worth it, but you haven't shown that.<br> <p> And even if you are talking about second-order effects like "it wasted time I could have used on a better reviewing tool", you would still need to show which ones are the better tools. And even if one tool is way better than another, it doesn't automatically follow that it isn't worth it to run both. And so on and so forth.<br> <p> So, no, your logic doesn't follow, sorry.<br> <p> And it is not just reviewing that your policy bans. The policy casts such a wide net (banning even tools "in the contributing process" that do not generate code) that people cannot even report bugs found with those tools when applied over the codebase. 
So, for instance, an external team that runs such tools for other projects will need to skip your repository, even if they manually curate the results and so on.<br> </div> Sat, 18 Oct 2025 21:01:42 +0000 What about false negatives? https://lwn.net/Articles/1042550/ https://lwn.net/Articles/1042550/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;if it sometimes can be a lie, you need to consider it *always* as if it were a lie</span><br> <p> I think that this conclusion doesn't make any sense at all.<br> By the same reasoning you would have to require compiler warnings to be 100% correct all the time, or ban the use of any compiler that gives you false warnings (i.e. every compiler).<br> </div> Sat, 18 Oct 2025 20:03:46 +0000 What about false negatives? https://lwn.net/Articles/1042548/ https://lwn.net/Articles/1042548/ alx.manpages <div class="FormattedComment"> <span class="QuotedText">&gt; Those are not counterexamples -- you said "always", not "sometimes".</span><br> <p> A liar may sometimes tell the truth. The problem is that if it sometimes can be a lie, you need to consider it *always* as if it were a lie. The quality of a contribution doesn't exist in a vacuum, and the possibility of it being a hallucination is already lowering the quality, even if a given instance is actually good.<br> </div> Sat, 18 Oct 2025 19:45:47 +0000 What about false negatives? https://lwn.net/Articles/1042544/ https://lwn.net/Articles/1042544/ ojeda <div class="FormattedComment"> <span class="QuotedText">&gt; Here are a few counter-counterexamples</span><br> <p> Those are not counterexamples -- you said "always", not "sometimes".<br> <p> And even if you meant "most of the time", if there is a tool so bad that it is wrong often enough and, on top of that, manages to misdirect authors often enough, then it will simply stop being used sooner or later. That applies to all tools, not just AI-based ones. We could be talking about a bad compiler warning that gets disabled, for instance.<br> <p> <span class="QuotedText">&gt; Running an AI tool might lead a programmer to be more convinced that the patch is good, and thus less prone to running other tools, or asking other humans to review a patch. Those other tools or humans would probably do a better job.</span><br> <p> You could argue the same about running `checkpatch.pl`, or a static analyzer, or even warnings in your compiler, or even using a safer language...<br> <p> That doesn't mean every reviewing tool should be banned.<br> <p> <span class="QuotedText">&gt; *and* also asked all of the humans that could help</span><br> <p> That doesn't apply in the case I mentioned: someone reviewing their patch before submitting it.<br> <p> And your policy bans even that.<br> <p> <span class="QuotedText">&gt; But at that point, we're deep into diminishing returns.</span><br> <p> Not really. Tools are fairly different from one another, even in the "regular" set you mention, and LLMs are quite different from the "usual" tools. It doesn't even need to be about reviewing code.<br> </div> Sat, 18 Oct 2025 19:06:12 +0000 What about false negatives? 
https://lwn.net/Articles/1042545/ https://lwn.net/Articles/1042545/ alx.manpages <div class="FormattedComment"> <span class="QuotedText">&gt; &gt;A project can't force you to comply, but it's not forced either to accept the submissions.</span><br> <p> <span class="QuotedText">&gt; True.</span><br> <span class="QuotedText">&gt; And to actually reject such a submission, you need to run X, Y, and Z tests on *your* side to find out whether you have to reject it.</span><br> <p> One may fool a maintainer by saying one hasn't used any AI for contributing. Just like one can dump AI slop directly as output from a chatbot and let the maintainer figure out if it's valid code.<br> Depending on how plausible that output is, one might fool more or fewer maintainers.<br> <p> See also: &lt;<a href="https://xcancel.com/spendergrsec/status/1958264076162998771">https://xcancel.com/spendergrsec/status/1958264076162998771</a>&gt;<br> <p> After all, this isn't much different from the contributor claiming to not have copied code violating a license. That's something I can't verify as a maintainer, and I have to take the contributor at their word.<br> <p> But one's reputation might be busted if it is eventually found out that one lied to the maintainers of a project.<br> <p> I don't mind too much if someone uses AI tools if they didn't know the guideline. That's something I'd just remind the contributor I don't want them to do. But if they know and still do it, and I somehow find out, then they're busted.<br> <p> <span class="QuotedText">&gt; That would best be done in your CI.</span><br> <span class="QuotedText">&gt; Just saying that the developer has to run X, Y, and Z tests simply isn't enough.</span><br> <p> The good thing about tests is that, as you say, I can verify them in CI. This means I'm not too worried if a contributor doesn't run them. I simply remind them that they can run them, but since I'm able to run them myself on my CI server, it's not enough of a reason to distrust a contributor.<br> </div> Sat, 18 Oct 2025 17:55:20 +0000 What about false negatives? https://lwn.net/Articles/1042543/ https://lwn.net/Articles/1042543/ alx.manpages <div class="FormattedComment"> <span class="QuotedText">&gt; It is a broken assumption, and a counterexample is trivial: someone runs a tool that finds an issue with a patch, and the author fixes it before submitting.</span><br> <p> I remain unconvinced. Here are a few counter-counterexamples:<br> <p> Someone runs an AI tool that finds a false positive. The AI fools the programmer into believing it is valid, and results in the introduction of a bug instead of a fix. AI tools can fool humans more easily than regular tools can.<br> <p> <span class="QuotedText">&gt; Of course, there are costs to running many tools, diminishing returns, etc. But that is a different discussion.</span><br> <p> I think it's part of the same discussion. Let's go back to my first post: what about false negatives? Running an AI tool might lead a programmer to be more convinced that the patch is good, and thus less prone to running other tools, or asking other humans to review a patch. Those other tools or humans would probably do a better job.<br> <p> Should one run 10 regular tools vs 5 regular tools and an AI tool?<br> <p> It would only make sense to run the AI tool if you've already run *all* of the existing regular tools *and* also asked all of the humans that could help. 
But at that point, we're deep into diminishing returns.<br> <p> And there's still the possibility that the AI tool might fool you into breaking a good patch, breaking what those humans had reviewed.<br> </div> Sat, 18 Oct 2025 17:27:50 +0000 What about false negatives? https://lwn.net/Articles/1042542/ https://lwn.net/Articles/1042542/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;A project can't force you to comply, but it's not forced either to accept the submissions.</span><br> <p> True.<br> And to actually reject such a submission, you need to run X, Y, and Z tests on *your* side to find out whether you have to reject it.<br> That would best be done in your CI.<br> Just saying that the developer has to run X, Y, and Z tests simply isn't enough.<br> </div> Sat, 18 Oct 2025 17:18:22 +0000 What about false negatives? https://lwn.net/Articles/1042539/ https://lwn.net/Articles/1042539/ ojeda <div class="FormattedComment"> <span class="QuotedText">&gt; Yes, that's my assumption.</span><br> <p> It is a broken assumption, and a counterexample is trivial: someone runs a tool that finds an issue with a patch, and the author fixes it before submitting.<br> <p> So, no, the end result will not "always have lower quality".<br> <p> And note how that applies regardless of whether the tool in question uses AI or not. It could be `checkpatch.pl` I was thinking about.<br> <p> In practice, applying more tools generally means better results, just like more reviewers and more review time help. And someone's fancy AI tool can definitely find issues sometimes that other tools may not. Of course, there are costs to running many tools, diminishing returns, etc. But that is a different discussion.<br> </div> Sat, 18 Oct 2025 16:53:50 +0000 What about false negatives? https://lwn.net/Articles/1042538/ https://lwn.net/Articles/1042538/ alx.manpages <div class="FormattedComment"> There's no legal obligation to comply. It's a contribution guideline, not a license. I'm stating that I don't want to receive contributions of low quality, and I consider submissions created with any help from AI to be of low quality.<br> <p> It's the same as a guideline saying you must run X, Y, and Z tests before submitting. A project can't force you to comply, but it's not forced either to accept the submissions.<br> <p> </div> Sat, 18 Oct 2025 16:17:53 +0000 What about false negatives? https://lwn.net/Articles/1042532/ https://lwn.net/Articles/1042532/ intelfx <div class="FormattedComment"> <span class="QuotedText">&gt; I don't mind what you do on your computer, as long as it doesn't _influence_ the quality of the contributions.</span><br> <p> All of this is none of your business.<br> <p> You can judge my contribution on its merits (or not at all). That's all you can do. What I do in my life is not your concern, even if it can hypothetically impact the quality of said contribution.<br> </div> Sat, 18 Oct 2025 11:27:18 +0000 Git forges https://lwn.net/Articles/1042531/ https://lwn.net/Articles/1042531/ mpg <div class="FormattedComment"> git commit --no-verify is your friend here.<br> </div> Sat, 18 Oct 2025 11:12:52 +0000
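A note on the git commands discussed in the "Git forges" comments above: "git commit --no-verify" skips the pre-commit and commit-msg hooks for a single commit, and a git alias can shorten that to one word, which is the convenience (and the trap) Cyberax alludes to. A minimal sketch, assuming a repository that has such hooks installed; the alias name "cin" is purely an example, not an established convention:

    # Skip the pre-commit and commit-msg hooks for this one commit only.
    git commit --no-verify -m "WIP: local checkpoint, hooks deliberately skipped"

    # Define a shorthand for it ("cin" is an arbitrary, hypothetical name).
    git config --global alias.cin "commit --no-verify"
    git cin -m "WIP: another checkpoint"

    # Before preparing a PR, go back to plain "git commit" (or run the
    # hooks by hand) so the checks actually run on what gets submitted.

The catch, as the thread implies, is that once the no-verify path becomes the habit, nothing forces the hooks to run again before submission; that verification has to happen on the receiving side, for example in CI.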