
Define “prompt”


Posted Oct 15, 2025 21:18 UTC (Wed) by SLi (subscriber, #53131)
In reply to: Define “prompt” by Baughn
Parent article: The FSF considers large language models

Even in the early ChatGPT days, when I wouldn't have considered asking it to produce code, I found this an excellent way to flesh out and simplify designs. Not because the models were right; often, in fact, because their ridiculous solutions made me think of approaches I would otherwise have missed.

According to lore, some programmers talk to rubber ducks to solve their problems. Well, even GPT-3 was definitely more than a rubber duck. Not necessarily 10x better, but still better. These recent models? I think they're genuinely useful even in domains you don't know well. An example (I could give another from a domain I knew even less about, bookbinding, but this message is already long):

I've been taking a deep dive into Rust for the past few days, and I don't know how I would replace the crate and approach suggestions I've got from LLMs. Probably the old-fashioned way, reading enough Rust code to see what people do today, but I'm sure that would have been several times the effort. The same applies to them quickly digging up the reason why a particular snippet makes the borrow checker unhappy and suggesting an alternative. One does not easily learn to search for `smallvec` without ever having heard of it.

Or, today, diving into process groups, sessions, their interaction with ptys (which I didn't know well), and "why on earth do I end up with a process tree like this"—the LLM taught me about subreapers, which I did not know about and would not have easily guessed to search for.

I think one problem is that people get angry if LLMs are not right 100% of the time. Even that seems a bit like "you're using it wrong". Don't rely on it to be right all the time. (As a side note, don't rely on humans to be either, unless they say very little.) Rely on it to give you the big picture fast; that is roughly where you'd be after some time of self-study, still harboring misconceptions to be corrected, and much preferable to having no idea.



Define “prompt”

Posted Oct 16, 2025 7:03 UTC (Thu) by Wol (subscriber, #4433)

> According to lore, some programmers talk to rubber ducks to solve their problems.

I have a stuffed Tux on my desk for exactly that reason (although I rarely use it).

But how often has explaining the problem to a colleague resulted in you solving it, often without a word from said colleague? That's why a rubber duck / stuffed Tux / whatever is such a useful debugging aid. It might feel weird holding a conversation with an inanimate object, but don't knock it. It works ...

Cheers,
Wol

Define “prompt”

Posted Oct 16, 2025 11:48 UTC (Thu) by iabervon (subscriber, #722)

I've been finding that typing the explanation, as if I were talking to coworkers in a group chat, works just as well as saying it out loud. Putting it in a version-controlled file that I clear out before making a pull request often leaves me with some great phrasing to use in the documentation or commit message, even though the original form would be useless anywhere outside an unfinished topic branch. It also gives me some great information when I come back to a preempted project a few months later and want to know what I said to the duck while I was working on it.

Of course, it means I have a file in version control that says it's a list of explanations of the issues I'm facing with features in progress, and then never contains anything else in any mainline commit.

Define “prompt”

Posted Oct 16, 2025 16:04 UTC (Thu) by SLi (subscriber, #53131)

I agree. Often even better when you put some time into it.

But I think writing clearly in a non-dialog setting is a skill that perhaps even most engineers lack. I think all engineers should be taught technical writing (I know my university didn't teach me). Many don't even seem to realize it's a rather different skill set.

Define “prompt”

Posted Oct 16, 2025 13:40 UTC (Thu) by kleptog (subscriber, #1183)

I find LLMs especially useful for finding the big picture. If I'm trying to work out why something isn't working, they can give me the name of the component that probably has the issue, and then I can search for it.

The first time I really saw this was when I was trying to do something with CodeMirror and getting all sorts of conflicting advice from different sites. Eventually I fed the errors to ChatGPT, and it pointed out that versions 5 and 6 use completely different configuration styles. No search engine would have told me that; no website specifies which version it is using.

And for one-off scripts it's amazing. Hey, I need a script that does steps X, Y, and Z in Python; here is the previous bash script that did this. And voilà.

Treat it like an idiot that knows everything and understands nothing. Because that's what it is... The trick is to combine your understanding with its knowledge.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds