Truly understanding programming
Posted Jul 25, 2024 9:56 UTC (Thu) by khim (subscriber, #9252)
In reply to: Truly understanding programming by rgmoore
Parent article: Imitation, not artificial, intelligence
> LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine.
Yes, they would. I doubt Turing ever anticipated that his test would be passed in such a crazy way, but that's what happened: the “new generation” acts and talks more and more like LLMs do!
It's not clear whether that's just lack of experience and they will outgrow the threshold that the likes of ChatGPT 4 have set, but the truth is out there: it's more or less impossible to distinguish the gibberish that ChatGPT produces from the gibberish that many humans produce!
And if you start raising the bar to exclude “idiots” from the test, then it's no longer a Turing test, but something different.
> It's not just that their language doesn't sound quite right; they just can't hold up a conversation for long without drifting into nonsense.
True for some humans as well, when you ask them to talk about something they are not experts in. Especially older ones.
LLMs, today, are in a very strange place: they can't do anything well, but they can do many things badly. Then again, most human workers also perform badly; the expert ones are very, very rare nowadays!
Posted Jul 26, 2024 14:43 UTC (Fri) by atnot (subscriber, #124910)
This seems like a silly criterion. If you can't reliably distinguish the gibberish created by a brick falling onto a keyboard from the gibberish created by a cat sitting on one, that doesn't make the brick as intelligent as a cat.
