Define “prompt”
Define “prompt”
Posted Oct 15, 2025 4:21 UTC (Wed) by mussell (subscriber, #170320)In reply to: Define “prompt” by Baughn
Parent article: The FSF considers large language models
Posted Oct 15, 2025 14:25 UTC (Wed)
by Baughn (subscriber, #124425)
[Link] (7 responses)
LLMs just force it, since they don’t work well without a plan. You can rely on them reading your mind.
And I don’t know. Is a 5x increase in project scope worthwhile? Because that’s what I’ve been getting.
Posted Oct 15, 2025 21:18 UTC (Wed)
by SLi (subscriber, #53131)
[Link] (5 responses)
According to lore, some programmers talk to rubber ducks to solve their problems. Well, even GPT-3 was definitely more than a rubber duck. Not necessarily 10x better, but still better. These recent models? I think they're genuinely useful also in domains that you don't know so well. An example (I could also give another from a domain I knew even less about, book binding, but this message is already long):
I've been taking a deep dive into Rust for the past few days, and I don't think how I would replace the crate and approach suggestions I've got from LLMs. Probably the old-fashioned way, reading enough rust code to see what people do today, but I'm sure that would have been several times the effort. The same applies to them digging quickly the reason why a particular snippet makes the borrow checker unhappy and suggesting an alternative. One does not easily learn to search for `smallvec` without having ever heard of it.
Or, today, diving into the interaction of process groups, sessions, their interaction with ptys (which I didn't know well), and "why on earth do I end up with a process tree like this"—the LLM taught be about subreapers, which I did not know and would not have easily guessed to search for.
I think one problem is that people get angry if LLMs are not right 100% of the time. Even that seems a bit like "you're using it wrong". Don't rely on it to be right all the time. (As a side note, don't rely on humans to be either, unless they say very little.) Rely on it to give a big picture fast, which is where you might be after some time of self-study while still harboring misconceptions to be corrected—and much preferable to having no idea.
Posted Oct 16, 2025 7:03 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (2 responses)
I have a stuffed Tux on my desk for exactly that reason (although I rarely use it).
But how often has explaining the problem to a colleague resulted in you solving it, often without a word from said colleague? That's why a rubber duck / stuffed Tux / whatever is such a useful debugging aid. It might feel weird holding a conversation with an inanimate object, but don't knock it. It works ...
Cheers,
Posted Oct 16, 2025 11:48 UTC (Thu)
by iabervon (subscriber, #722)
[Link] (1 responses)
Of course, it means I have a file in version control which says that it's a list of explanations of the issues I'm facing with features in progress, and then doesn't have anything else in any mainline commit.
Posted Oct 16, 2025 16:04 UTC (Thu)
by SLi (subscriber, #53131)
[Link]
But I think writing clearly in a non-dialog setting is a skill that perhaps even most engineers lack. I think all engineers should be taught technical writing (I know my university didn't for me). Many don't even seem to realize it's a rather different skill set.
Posted Oct 16, 2025 13:40 UTC (Thu)
by kleptog (subscriber, #1183)
[Link] (1 responses)
The first time I really saw this was when I was trying to do something with CodeMirror and was getting all sorts of conflicting advice from different sites. Eventually fed the errors to ChatGPT and it pointed out that version 5 and 6 use completely different configuration styles. No search engine would have told me that info. No website specifies which version they are using.
And for one off scripts it's amazing. Hey, I need a script that does the steps X, Y and Z in Python. Here is the previous bash script that did this. And voila.
Treat it like an idiot that knows everything and understands nothing. Because that's what it is... The trick is to combine your understanding with its knowledge.
Posted Oct 23, 2025 11:07 UTC (Thu)
by nye (subscriber, #51576)
[Link]
I think this is the best description of an LLM that I've seen anywhere.
Posted Oct 20, 2025 2:14 UTC (Mon)
by gmatht (subscriber, #58961)
[Link]
Like all C programmers, I can write C in any language. Sometimes when I start writing C in Python the LLM will offer to complete my involved algorithm with a 2 line pythonic solution. Also the LLM's initial draft of a UI looks nicer than the functional but plain version I would call v1.0.
I seem to recall a quote saying something along the lines of: I will always write better code than a compiler/LLM, because I can use a compiler/LLM.
The biggest weakness of LLMs seems to be that it is not possible to reach v1 with vibe coding because once the code base reaches a certain level of quality the LLM will become more interested in adding new bugs than fixing old ones. For example, it will find a polished algorithm and observe that the tests only cover several values so it can simplify the algorithm by just hardcoding those values and still "pass".
Posted Oct 16, 2025 6:06 UTC (Thu)
by azumanga (subscriber, #90158)
[Link]
I was stuck with a slowly dying Python 2 program, which a few people had tried to update (and failed) to Python 3. I previously tried for 4 full days before I realised I was no-where close, and gave up.
I sat for an afternoon with Claude Code, and finished a full Python 3 translation.
Claude found replacement libraries for things without a Python 3 version, wrote fresh implementations of a couple of functions that didn't get a Python 3 upgrade (I checked, it didn't just copy the originals), and helped me then fix up all the unicode issues from the Python 2 -> Python 3 upgrade process.
Define “prompt”
Define “prompt”
Define “prompt”
Wol
Define “prompt”
Define “prompt”
Define “prompt”
Define “prompt”
Better than human (sometimes)
Define “prompt”
