LWN: Comments on "The FSF considers large language models" https://lwn.net/Articles/1040888/ This is a special feed containing comments posted to the individual LWN article titled "The FSF considers large language models". en-us Sat, 25 Oct 2025 21:23:15 +0000 Sat, 25 Oct 2025 21:23:15 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Define “prompt” https://lwn.net/Articles/1043337/ https://lwn.net/Articles/1043337/ davidgerard <div class="FormattedComment"> you mean: "it can't be that stupid, you must be prompting it wrong"?<br> </div> Sat, 25 Oct 2025 19:28:13 +0000 Define “prompt” https://lwn.net/Articles/1043164/ https://lwn.net/Articles/1043164/ taladar <div class="FormattedComment"> More importantly, it also depends on the exact model in use, and at least for the hosted models there is no way to get exactly the same version as on some past request.<br> </div> Fri, 24 Oct 2025 07:50:30 +0000 Define “prompt” https://lwn.net/Articles/1042971/ https://lwn.net/Articles/1042971/ nye <div class="FormattedComment"> <span class="QuotedText">&gt; Treat it like an idiot that knows everything and understands nothing. Because that's what it is... The trick is to combine your understanding with its knowledge.</span><br> <p> I think this is the best description of an LLM that I've seen anywhere.<br> </div> Thu, 23 Oct 2025 11:07:23 +0000 Define “prompt” https://lwn.net/Articles/1042968/ https://lwn.net/Articles/1042968/ jvoss2 <div class="FormattedComment"> I want to second this: in my opinion, requests to document "the prompt" are not practical, because there are too many small bits of prompt, some typed by the user and some from system prompts, tool descriptions, pre-written guidance for sub-agents, etc. Even if this could somehow all be bundled together in a digestible form, it would still not be very useful, because what the LLM does also depends on the state of the file system at the time the request was made. (Example: "Please review the code in somefile.c. [LLM thinks about it and reports back] Ok, please fix the integer overflow you found by adding an explicit check near the start of the function.")<br> </div> Thu, 23 Oct 2025 10:34:42 +0000 Define “prompt” https://lwn.net/Articles/1042964/ https://lwn.net/Articles/1042964/ taladar <div class="FormattedComment"> If you just want a CRUD app auto-generated, LLMs seem like overkill; it is probably easy to do that with a regular template engine, possibly even with the simple ones in project template tools (e.g. cargo-generate; I'm not sure about a Python equivalent).<br> </div> Thu, 23 Oct 2025 07:32:53 +0000
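[Editor's note: to make the template-engine suggestion above concrete, here is a minimal sketch in plain Python using only the standard library's string.Template. The Django-style output, the Invoice model and the field names are all made up for illustration; on the Python side, a project template tool roughly analogous to cargo-generate would be something like cookiecutter.]

<pre>
# Minimal sketch of template-driven CRUD boilerplate: no LLM involved,
# just the stdlib template engine plus a list of field names.
from string import Template

MODEL_TMPL = Template("""\
from django.db import models

class $model(models.Model):
$fields

    def __str__(self):
        return str(self.pk)
""")

def render_model(model, field_names):
    # Hypothetical simplification: every field becomes a CharField.
    fields = "\n".join(
        f"    {name} = models.CharField(max_length=200)" for name in field_names
    )
    return MODEL_TMPL.substitute(model=model, fields=fields)

# This only generates source text, so Django itself is not needed to run it.
print(render_model("Invoice", ["customer", "reference", "status"]))
</pre>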
Define “prompt” https://lwn.net/Articles/1042948/ https://lwn.net/Articles/1042948/ raven667 <div class="FormattedComment"> That seems about right. I might also add that very simple usage of existing comprehensive frameworks seems like something LLMs should be able to cough up: boilerplate describing how to make a simple CRUD app should have plenty of examples in the training data, so if you tell it what field names you want, it should be possible to spit out a Django app. But I haven't tested that theory, as I haven't touched LLMs, not even once. Maybe a freeform text frontend to ffmpeg invocations ;-)<br> </div> Wed, 22 Oct 2025 17:25:00 +0000 Define “prompt” https://lwn.net/Articles/1042614/ https://lwn.net/Articles/1042614/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; Which is not surprising. They are useful tools, once you know how they work. And a remarkable amount of code is not really doing anything that novel or unique.</span><br> <p> In other words, the places where LLMs are most useful are twofold:<br> <p> * A successor to the boilerplate-generating development environment "Wizards" [1]<br> * Fancy autocomplete.<br> <p> <p> [1] Referring to interactive prompt-guided templating engines popularized by Microsoft in Visual&lt;whatever&gt; development environments in the early 90s.<br> </div> Mon, 20 Oct 2025 12:36:16 +0000 Define “prompt” https://lwn.net/Articles/1042601/ https://lwn.net/Articles/1042601/ ssmith32 <div class="FormattedComment"> That's a bit of a straw man argument. There are plenty of people who, like me, find it useful for simple transformations or for generating boilerplate that, unfortunately, continues to persist in the codebase for various and sundry reasons, but who also recognize that it can fail hilariously at simple tasks.<br> <p> I asked my claude-powered assistant to:<br> <p> - upgrade a library to a specific version. Instead, it set an unrelated config value, whose name was similar to the library's, to the name of the library. The config file was most emphatically _not_ part of the build system. If LLMs truly understood "context" like people claim, it should have ruled out touching that file completely.<br> <p> - generate a bunch of boilerplate for writing out new objects to a datastore that still needs boilerplate. It mostly got this right.<br> <p> - generate a dockerfile for me. It saved time and worked, but added an unusual amount of completely useless cruft. Still, it was faster to quickly remove that than to make the file myself from scratch.<br> <p> - tell me how to install a particular java version on my mac. It utterly failed. It kept insisting on using a cask that no longer exists, on downloading it from locations that no longer hosted that particular version, etc. It was clearly just barfing up the suggestions from a bunch of outdated blogs.<br> <p> For something that has similar patterns in your codebase, or has plenty of (correct) examples in documentation and random websites, it can do great.<br> <p> For something novel or unique - even something as banal as understanding that a library is pulled in transitively and that another library must be updated along with it, or something both unique and genuinely interesting - LLMs fail miserably.<br> <p> Which is not surprising. They are useful tools, once you know how they work. And a remarkable amount of code is not really doing anything that novel or unique.<br> <p> For conversations about design, a co-worker or rubber duck is still much better for me.<br> </div> Mon, 20 Oct 2025 08:35:55 +0000 Better than human (sometimes) https://lwn.net/Articles/1042591/ https://lwn.net/Articles/1042591/ gmatht <div class="FormattedComment"> While LLMs can produce rubbish, sometimes they can do a better job than me.<br> <p> Like all C programmers, I can write C in any language. Sometimes when I start writing C in Python the LLM will offer to complete my involved algorithm with a 2-line pythonic solution.
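For example (a made-up illustration of the pattern, not from a real session):<br>
<pre>
# C-in-Python: sum the squares of the even numbers, with an explicit loop
def sum_even_squares(values):
    total = 0
    i = 0
    while i != len(values):
        if values[i] % 2 == 0:
            total += values[i] * values[i]
        i += 1
    return total

# The 2-line pythonic completion the LLM offers instead
def sum_even_squares_pythonic(values):
    return sum(v * v for v in values if v % 2 == 0)
</pre>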
Also the LLM's initial draft of a UI looks nicer than the functional but plain version I would call v1.0.<br> <p> I seem to recall a quote saying something along the lines of: I will always write better code than a compiler/LLM, because I can use a compiler/LLM.<br> <p> The biggest weakness of LLMs seems to be that it is not possible to reach v1 with vibe coding, because once the code base reaches a certain level of quality the LLM becomes more interested in adding new bugs than in fixing old ones. For example, it will find a polished algorithm, observe that the tests only cover a few values, and "simplify" the algorithm by just hardcoding those values so that it still "passes".<br> </div> Mon, 20 Oct 2025 02:14:33 +0000 Define “prompt” https://lwn.net/Articles/1042335/ https://lwn.net/Articles/1042335/ SLi <div class="FormattedComment"> I agree. Often it works even better when you put some time into it.<br> <p> But I think writing clearly in a non-dialog setting is a skill that perhaps even most engineers lack. I think all engineers should be taught technical writing (I know my university didn't teach it to me). Many don't even seem to realize it's a rather different skill set.<br> </div> Thu, 16 Oct 2025 16:04:22 +0000 Define “prompt” https://lwn.net/Articles/1042218/ https://lwn.net/Articles/1042218/ kleptog <div class="FormattedComment"> I find LLMs especially useful for figuring out what the big picture is. If I'm trying to work out why something isn't working, they can give you the name of the component that probably has the issue, and then you can search for it.<br> <p> The first time I really saw this was when I was trying to do something with CodeMirror and was getting all sorts of conflicting advice from different sites. Eventually I fed the errors to ChatGPT, and it pointed out that versions 5 and 6 use completely different configuration styles. No search engine would have told me that info. No website specifies which version it is using.<br> <p> And for one-off scripts it's amazing. Hey, I need a script that does the steps X, Y and Z in Python. Here is the previous bash script that did this. And voila.<br> <p> Treat it like an idiot that knows everything and understands nothing. Because that's what it is... The trick is to combine your understanding with its knowledge.<br> </div> Thu, 16 Oct 2025 13:40:40 +0000
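[Editor's note: the sort of one-off bash-to-Python conversion described above might look like this. This is a hypothetical example; the X, Y and Z in the comment are the commenter's own placeholders and are left unfilled.]

<pre>
#!/usr/bin/env python3
# Hypothetical one-off script of the kind described in the comment above.
# The imagined bash original was roughly:
#   grep ERROR "$1" | awk '{print $1}' | sort | uniq -c
# i.e. count ERROR lines in a log file, grouped by the first field.
import sys
from collections import Counter

counts = Counter()
with open(sys.argv[1]) as f:
    for line in f:
        if "ERROR" in line:
            counts[line.split()[0]] += 1  # group by first field, e.g. a date

for key, n in sorted(counts.items()):
    print(f"{n:7d} {key}")
</pre>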
Define “prompt” https://lwn.net/Articles/1042212/ https://lwn.net/Articles/1042212/ iabervon <div class="FormattedComment"> I've been finding that typing the explanation like I was talking to coworkers in a group chat works just as well as saying it out loud, and putting it in a version-controlled file that I clear out before making a pull request often results in having some great phrasing to use in the documentation or commit message, even though the original form would be useless anywhere outside of an unfinished topic branch. This also results in some great information when I come back to a preempted project a few months later and want to know what I said to the duck when I was working on it.<br> <p> Of course, it means I have a file in version control which says that it's a list of explanations of the issues I'm facing with features in progress, and which then never has anything else in it in any mainline commit.<br> </div> Thu, 16 Oct 2025 11:48:15 +0000 Define “prompt” https://lwn.net/Articles/1042202/ https://lwn.net/Articles/1042202/ taladar <div class="FormattedComment"> Oddly enough, none of the people who "get a lot of good out of them" have ever posted a video on YouTube or anywhere else showing that off with a convincing result in terms of the ratio of effort to output quality.<br> </div> Thu, 16 Oct 2025 08:17:34 +0000 Define “prompt” https://lwn.net/Articles/1042197/ https://lwn.net/Articles/1042197/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; According to lore, some programmers talk to rubber ducks to solve their problems.</span><br> <p> I have a stuffed Tux on my desk for exactly that reason (although I rarely use it).<br> <p> But how often has explaining the problem to a colleague resulted in you solving it, often without a word from said colleague? That's why a rubber duck / stuffed Tux / whatever is such a useful debugging aid. It might feel weird holding a conversation with an inanimate object, but don't knock it. It works ...<br> <p> Cheers,<br> Wol<br> </div> Thu, 16 Oct 2025 07:03:41 +0000 Define “prompt” https://lwn.net/Articles/1042195/ https://lwn.net/Articles/1042195/ azumanga <div class="FormattedComment"> It's up to you of course, but if you haven't tried any of the more recent models, I'd give them a try.<br> <p> I was stuck with a slowly dying Python 2 program, which a few people had tried (and failed) to update to Python 3. I had previously tried for 4 full days before I realised I was nowhere close, and gave up.<br> <p> I sat for an afternoon with Claude Code, and finished a full Python 3 translation.<br> <p> Claude found replacement libraries for things without a Python 3 version, wrote fresh implementations of a couple of functions that didn't get a Python 3 upgrade (I checked, it didn't just copy the originals), and then helped me fix up all the unicode issues from the Python 2 -&gt; Python 3 upgrade process.<br> <p> </div> Thu, 16 Oct 2025 06:06:32 +0000
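[Editor's note: the unicode fix-ups mentioned in that port are typically of this shape. A made-up minimal example, not from that code base:]

<pre>
# Python 2 code often mixed bytes and text freely, e.g.:
#     s = "caf\xc3\xa9"          # a byte string holding UTF-8
#     print s.decode("utf-8")    # decoded on demand, sometimes implicitly
# Python 3 separates str from bytes, so the port decodes once, at the
# boundary, and works with str everywhere after that:
raw = b"caf\xc3\xa9"             # bytes, as read from a file or socket
text = raw.decode("utf-8")       # now a proper str
print(text.upper())              # prints CAFÉ
</pre>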
And if we use bugs in our model? https://lwn.net/Articles/1042189/ https://lwn.net/Articles/1042189/ davecb <div class="FormattedComment"> It's hard enough to train statistically from one geometric and one algebraic model. Now make one of them buggy.<br> <p> Rinse, repeat.<br> </div> Thu, 16 Oct 2025 01:20:37 +0000 Define “prompt” https://lwn.net/Articles/1042180/ https://lwn.net/Articles/1042180/ SLi <div class="FormattedComment"> Yes, Google search is hilarious, especially if you mean the "people also ask" results :)<br> </div> Wed, 15 Oct 2025 21:19:22 +0000 Define “prompt” https://lwn.net/Articles/1042178/ https://lwn.net/Articles/1042178/ SLi <div class="FormattedComment"> Even in the early ChatGPT days, when I wouldn't have considered asking it to produce code, I found this an excellent way to flesh out and simplify designs. Not because the models were right; often, in fact, because their ridiculous solutions made me think of approaches that I would otherwise have missed.<br> <p> According to lore, some programmers talk to rubber ducks to solve their problems. Well, even GPT-3 was definitely more than a rubber duck. Not necessarily 10x better, but still better. These recent models? I think they're genuinely useful even in domains that you don't know so well. An example (I could also give another from a domain I knew even less about, book binding, but this message is already long):<br> <p> I've been taking a deep dive into Rust for the past few days, and I don't know how I would replace the crate and approach suggestions I've got from LLMs. Probably the old-fashioned way, reading enough Rust code to see what people do today, but I'm sure that would have been several times the effort. The same applies to them quickly digging up the reason why a particular snippet makes the borrow checker unhappy and suggesting an alternative. One does not easily learn to search for `smallvec` without having ever heard of it.<br> <p> Or, today, diving into process groups and sessions, their interaction with ptys (which I didn't know well), and "why on earth do I end up with a process tree like this": the LLM taught me about subreapers, which I did not know about and would not have easily guessed to search for.<br> <p> I think one problem is that people get angry if LLMs are not right 100% of the time. Even that seems a bit like "you're using it wrong". Don't rely on it to be right all the time. (As a side note, don't rely on humans to be either, unless they say very little.) Rely on it to give you the big picture fast, which is about where you might be after some time of self-study while still harboring misconceptions to be corrected, and much preferable to having no idea.<br> </div> Wed, 15 Oct 2025 21:18:45 +0000
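[Editor's note: for the curious, the subreaper mechanism mentioned above can be demonstrated in a few lines. A Linux-only sketch; PR_SET_CHILD_SUBREAPER is the real prctl(2) option (value 36), but the process tree here is invented for illustration.]

<pre>
# Without the prctl call below, the orphaned grandchild would reparent to
# PID 1, and os.wait() at the end would raise ChildProcessError instead.
import ctypes, os, time

PR_SET_CHILD_SUBREAPER = 36                   # from linux/prctl.h
libc = ctypes.CDLL(None, use_errno=True)
libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0)

pid = os.fork()
if pid == 0:                                  # child: fork a grandchild, then die
    if os.fork() == 0:
        time.sleep(1)                         # grandchild outlives its parent
        os._exit(0)
    os._exit(0)

os.waitpid(pid, 0)                            # reap the child
# The grandchild is now orphaned and reparents to us, the nearest
# subreaper, so we can reap it too:
print(os.wait())
</pre>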
Define “prompt” https://lwn.net/Articles/1042179/ https://lwn.net/Articles/1042179/ Wol <div class="FormattedComment"> Except every time I've tried to make it clearer, the AI just digs itself deeper into the same hole.<br> <p> Okay, the only AI I've (knowingly) used is Google search. And at least it has the decency to rephrase my query into the query it's going to answer (which it then answers pretty well). It's just that the question it's answering bears precious little resemblance to the question I asked it.<br> <p> Cheers,<br> Wol<br> </div> Wed, 15 Oct 2025 20:54:33 +0000 Define “prompt” https://lwn.net/Articles/1042176/ https://lwn.net/Articles/1042176/ SLi <div class="FormattedComment"> <span class="QuotedText">&gt; Unless you’ve only ever used ChatGPT, you will know that LLM-produced code is not the result of a single prompt, not even a conversation, but rather a workflow that often goes like this:</span><br> <p> Even with ChatGPT this should be the case.<br> <p> I've come to suspect that the usual difference between people who insist LLMs are absolutely useless and those who get a lot of good out of them is exactly that: take a human who's likely not even very good at communicating textually (few of us are; technical writing is a discipline for a reason), have him write a single sloppy prompt, and dismiss the results when the LLM was not able to read his mind.<br> </div> Wed, 15 Oct 2025 20:39:12 +0000 Define “prompt” https://lwn.net/Articles/1042136/ https://lwn.net/Articles/1042136/ Baughn <div class="FormattedComment"> It’s really not much effort. If you’re doing your job right, you should already have written designs for whatever features you’re coding.<br> <p> LLMs just force it, since they don’t work well without a plan. You can't rely on them reading your mind. <br> <p> And I don’t know. Is a 5x increase in project scope worthwhile? Because that’s what I’ve been getting. <br> </div> Wed, 15 Oct 2025 14:25:11 +0000 Libre AI? https://lwn.net/Articles/1041983/ https://lwn.net/Articles/1041983/ pabs <div class="FormattedComment"> Was there any discussion of what a Libre ML model, LLM or AI could look like?<br> <p> Personally I like Debian's document about that:<br> <p> <a href="https://salsa.debian.org/deeplearning-team/ml-policy/">https://salsa.debian.org/deeplearning-team/ml-policy/</a><br> <p> It would be very useful to have at least some of the former, for things like human language translation, noise removal from audio, text to speech, speech to text and so on.<br> </div> Wed, 15 Oct 2025 04:42:41 +0000 Define “prompt” https://lwn.net/Articles/1041980/ https://lwn.net/Articles/1041980/ mussell <div class="FormattedComment"> That sounds like way too much effort compared to a standard edit-compile-debug cycle without any LLM, and it costs orders of magnitude more power to boot. What's the benefit of these things again? Do we really want to outsource our thinking that much?<br> </div> Wed, 15 Oct 2025 04:21:36 +0000 Define “prompt” https://lwn.net/Articles/1041964/ https://lwn.net/Articles/1041964/ Baughn <div class="FormattedComment"> Unless you’ve only ever used ChatGPT, you will know that LLM-produced code is not the result of a single prompt, not even a conversation, but rather a workflow that often goes like this:<br> <p> - Discuss a problem with the LLM. The LLM autonomously reads large parts of the repository you’re working in during the discussion. <br> <p> - Ask it to write a plan. Edit the plan. Ask it about the edited plan. Edit it some more.<br> <p> - Repeatedly restart the LLM, asking it to code different parts of the plan. Debug the results. Write some code yourself. Create, rebase, or otherwise play around with the repository; keep multiple branches of potential code. <br> <p> - Go back and edit the original plan, now that you know what might work. Sometimes, port some unit tests back in time. <br> <p> - Repeat until done. <br> <p> There is a prompt. Actually, there are many prompts, all conveniently stored in verbose JSONL that also requires point-in-time snapshots of the repository you’re working in to make sense of. <br> <p> If someone were to ask me for that, I wouldn’t know where to start. It’s like asking for a recording of my desktop so they can be sure I’m not doing something they disapprove of. <br> </div> Wed, 15 Oct 2025 02:13:59 +0000 Now can we? https://lwn.net/Articles/1041930/ https://lwn.net/Articles/1041930/ gwolf <div class="FormattedComment"> Can we programmers actually «cite any inspirations for code we write»? Do we often do that?<br> Be it that I learnt programming at school or by reading books, or that I took a "BootCamp", I cannot usually say where I got a particular construct from. I could, of course, say that I write C in the K&amp;R style, but I doubt that's what Siewicz refers to. And of course, Perl-heads will recognize a "Schwartzian transform". But in general, I learnt _how to code_, and I am not able to attribute specific constructs of my programming to specific bits of code. Just like an LLM.<br> <p> If most of my programming consisted of searching StackOverflow for answers to questions related to mine... I *could* be persuaded to link to the post in question in a comment before each included snippet. But that's also not something I've often seen done. And if I didn't write the comment _the same moment_ I included said snippet, it's most likely I never will.<br> <p> So... 
I think there is an argumentative issue in here :-)<br> </div> Tue, 14 Oct 2025 16:43:57 +0000