Making information more accessible
Posted Jul 24, 2024 13:45 UTC (Wed) by paulj (subscriber, #341)
In reply to: Making information more accessible by karath
Parent article: Imitation, not artificial, intelligence
(And hell, even if fed on a body of high quality tutorials and example code it will still bullshit! See the "ChatGPT is bullshit" paper, which another commenter pointed to in the Meta LLM open source article).
Truly understanding programming
Posted Jul 24, 2024 15:40 UTC (Wed) by epa (subscriber, #39769) [Link] (13 responses)
Truly understanding programming
Posted Jul 24, 2024 16:26 UTC (Wed) by somlo (subscriber, #92421) [Link] (11 responses)
Careful with the "No True Scotsman" iterations on the Turing test there... :)
Unlike Turing's mathematical work, the imitation game was merely a thought experiment we all sort of rolled with over the years. The idea is to build a Turing machine (i/o + finite state machine) that would be able to *fake* it when judged by some "average person" interacting with it, and we can say LLMs have mostly succeeded in that regard.
It is conceivable that in the future, a more complex (but still finite) state machine might be able to fake understanding how to program based on some rules (if you think about it, Searle's Chinese Room is just another Turing machine: i/o + state machine, which is, in this case, able to fake understanding Chinese to a Chinese speaker standing outside the room).
However, true intelligence isn't when the Turing machine can make *you* believe it's intelligent. Rather, it happens when the machine thinks to *itself* that it is intelligent (the whole Descartes "cogito ergo sum" thing). As to how *we* get to test for that, I don't think science has that figured out yet... :)
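To make the "Chinese Room is just i/o plus a rule table" point concrete, here is a toy sketch of my own (not anything from the original comment, and absurdly smaller than any real system): a fixed lookup table that can look fluent to the person outside the room while nothing inside understands a word.

    # A "Chinese Room" as a trivial state machine: input -> rule lookup -> output.
    # The rule book is hypothetical; a real room would need vastly more rules.
    RULE_BOOK = {
        "你好": "你好！",                # "hello" -> "hello!"
        "你会说中文吗？": "当然会。",    # "do you speak Chinese?" -> "of course."
    }

    def chinese_room(message: str) -> str:
        # The operator blindly matches symbols against the book; no understanding involved.
        return RULE_BOOK.get(message, "请再说一遍。")  # fallback: "please say that again."

    print(chinese_room("你好"))  # looks like a fluent reply from outside the room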
Truly understanding programming
Posted Jul 24, 2024 18:27 UTC (Wed) by k3ninho (subscriber, #50375) [Link]
I'm glad you're happy with that state of play; I'm not. A web page has a reference to itself, and almost any more complicated program will too, so almost any more-complicated machine intelligence will parrot "cogito ergo sum"; I figure this is another case of "GOTO considered harmful." The more I'm asked to credit machinery and animals with smarts, the more I find: social and tool-using corvids who also train their young to recognise helpful and harmful humans, orcas in pods training and sustaining their young while attacking, ants collectively finding optimal paths, primates (and dogs and cats) learning sign language... there's more intelligence in the world than our culture gives credit for. Which brings me to the next point: if we're not giving credit to animal intelligence, how are we going to spot machine intelligence emerging from our computer systems?
>As to how *we* get to test for that, I don't think science has that figured out yet... :)
I think this is backwards. The question is one that the applicant for personhood asks us and our scientists: "What do I have to do so you will treat me like a person?"
K3n.
Truly understanding programming
Posted Jul 24, 2024 21:46 UTC (Wed) by Wol (subscriber, #4433) [Link]
And something nature has figured out, but we haven't, is that in order to be intelligent, we need special-purpose hardware. For example, most people have special-purpose face-recognition hardware, which is also hard-wired to name association.
There's a whole bunch of other hard-wired special-purpose systems, all of which are in their own small way doing their best to make sure that the General Purpose Brain is kept firmly in step with the "reality" outside. The thing about these LLMs is that they are mathematical models that don't care about that reality. So intelligence is impossible, as they retreat inside their model and disappear deeper and deeper into their alternative reality.
Who was it who wrote that story about traumatised soldiers who kept on disappearing and reappearing? And when the Generals discovered they were (apparently) disappearing into the past and things like that, they thought it would be brilliant - as a weapon of war - to go back in history and rewrite it. Until they discovered that the Roman Army wore wristwatches ...
AI will very soon (if it isn't already) be living in that world, not ours, until we find some way of telling the AI watches didn't exist back then...
Cheers,
Wol
Truly understanding programming
Posted Jul 24, 2024 22:16 UTC (Wed) by epa (subscriber, #39769) [Link] (5 responses)
Truly understanding programming
Posted Jul 25, 2024 9:18 UTC (Thu) by khim (subscriber, #9252) [Link] (4 responses)
Right now these models couldn't even correctly implement an algorithm they describe, in any language they already know. The only way for them to do that correctly is if they have seen the full solution somewhere, which makes them perfect for cheating and fraud, but not all that suitable for real work. Except maybe for some repeatable work that we have been trying to eliminate for decades. Which is cool, but more of an evolutionary step along the road that IDEs started down decades ago than something entirely new.
Truly understanding programming
Posted Jul 25, 2024 10:14 UTC (Thu) by paulj (subscriber, #341) [Link]
Given that (if the claim about Turing equivalence is true, as I've read), there is a way to take that algorithm and describe it - both in a more "normal" computer language and in human language.
Truly understanding programming
Posted Jul 26, 2024 4:09 UTC (Fri) by raven667 (subscriber, #5198) [Link] (2 responses)
Which is a funny point, because they could extend IDE auto-complete functionality and be a useful addition, but the insistence that _all_ the work needs to be done by the LLM, plus the lack of feedback, makes it worse: its output is not being cross-referenced with everything the LSP knows about the code in your editor, letting it create sample code which references methods/arguments/etc. that don't exist. It's similar to how LLMs can't actually perform math calculations: the developers don't build in an escape hatch to just run a calculator when a math expression appears in the input, because they are so fixated on and proud of what the LLM appears to do that they choose not to build in the proper feedback mechanisms to hand off to a more appropriate tool when appropriate.
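As a rough illustration of the missing feedback loops (my own sketch, with made-up names, not how any real IDE or assistant is actually wired up): check an LLM-suggested call against the symbols the language server actually reports, and short-circuit plain arithmetic to a real evaluator instead of letting the model guess.

    import ast
    import operator
    import re

    # Hypothetical: the set of symbols the LSP reported for the file being edited.
    KNOWN_SYMBOLS = {"open", "read_config", "save_config"}

    def vet_suggestion(suggested_call: str) -> str | None:
        """Drop a completion that names a function the LSP has never heard of."""
        name = suggested_call.split("(")[0].strip()
        return suggested_call if name in KNOWN_SYMBOLS else None

    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def _eval_arith(node):
        """Evaluate a parsed expression made only of numbers and + - * /."""
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval_arith(node.left), _eval_arith(node.right))
        raise ValueError("not plain arithmetic")

    def fake_llm(prompt: str) -> str:
        return "<whatever the model says>"  # stand-in for the actual model call

    def answer(prompt: str) -> str:
        """Escape hatch: if the prompt is just arithmetic, compute it exactly."""
        if re.fullmatch(r"[\d\s+\-*/().]+", prompt.strip()):
            return str(_eval_arith(ast.parse(prompt.strip(), mode="eval").body))
        return fake_llm(prompt)

    print(vet_suggestion("read_config(path)"))  # kept: the LSP knows this symbol
    print(vet_suggestion("load_yaml(path)"))    # None: hallucinated method, rejected
    print(answer("12 * (3 + 4)"))               # 84, computed rather than guessed

The details don't matter; the point is that these cross-checks are cheap and the tools to do them already exist.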
Truly understanding programming
Posted Jul 26, 2024 7:35 UTC (Fri) by khim (subscriber, #9252) [Link]
Actually, that's what they have tried to do, and are still trying to do, in most companies, but then OpenAI released ChatGPT and the hype train started. After that they just had no time to properly integrate anything; everyone else just had to show that they could do cool things, too. Even if they couldn't, just yet.
Truly understanding programming
Posted Jul 26, 2024 14:34 UTC (Fri) by atnot (subscriber, #124910) [Link]
Yes, I've been thinking this ever since I first saw people using GitHub Copilot. Our editors are already pretty darn good at finding out what the next valid thing can be. And they're basically guaranteed to be correct too, no "hallucination" nonsense. They're just not particularly good at sorting those options by relevance right now. There's no need to train these huge models to output code from nothing to get those advantages. Surely you could make some sort of model to score autocomplete suggestions by surrounding context that would be at least as good as, if not better than, what Microsoft is offering. And it surely wouldn't require the energy consumption of a small nation either. I'd use that in a heartbeat. But nobody seems to be interested in that sort of thing because it's not going to "replace programmers" or whatever.
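Something like the toy ranker below (my own sketch; the crude scoring function is just a stand-in for the small context-scoring model being suggested) shows the shape of it: the LSP supplies only completions that really exist, and context is used purely to order them.

    import re

    def rank_completions(candidates: list[str], surrounding_code: str) -> list[str]:
        """Order LSP-provided completions by overlap with nearby identifiers."""
        context_words = set(re.findall(r"[A-Za-z_]+", surrounding_code.lower()))

        def score(candidate: str) -> int:
            # Split snake_case / camelCase candidates into fragments and count
            # how many of them already appear in the surrounding code.
            parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+|\d+", candidate)
            return sum(1 for part in parts if part.lower() in context_words)

        return sorted(candidates, key=score, reverse=True)

    # Every candidate comes from the LSP, so none of them can be hallucinated;
    # the scorer's only job is to guess which one the programmer wants next.
    print(rank_completions(
        ["read_text", "write_text", "exists", "unlink"],
        "contents = path.   # we want to read the text of the file",
    ))
    # -> ['read_text', 'write_text', 'exists', 'unlink']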
Truly understanding programming
Posted Jul 25, 2024 4:20 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link] (2 responses)
> The idea is to build a Turing machine (i/o + finite state machine) that would be able to *fake* it when judged by some "average person" interacting with it, and we can say LLMs have mostly succeeded in that regard.
That's not actually what Turing said, though. The original "imitation game" was intended to be played with a skeptical judge doing their best to tell which interviewee was a human and which was a machine. LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine. It's not just that their language doesn't sound quite right; they just can't hold up a conversation for long without drifting into nonsense. Their creators have tried to mask this by limiting the length of their output so they can get new human input to respond to, but it still shows up.
Not that passing a Turing test was ever intended to be the one true test of intelligence. It was intended to be an extremely stringent test that should satisfy even people who were most skeptical of the idea of machine intelligence. After all, if we really can't distinguish a machine from a human, it's hard to argue that the machine hasn't achieved human-level intelligence.
Truly understanding programming
Posted Jul 25, 2024 9:56 UTC (Thu) by khim (subscriber, #9252) [Link] (1 responses)
> LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine.
Yes, yes, they would. I doubt Turing ever anticipated that his test would be passed in such a crazy way, but that's what happened: the “new generation” acts and talks more and more like LLMs do! It's not clear whether that's just lack of experience and they will outgrow the threshold that the likes of ChatGPT 4 set, but the truth is out there: it's more-or-less impossible to distinguish the gibberish that ChatGPT produces from the gibberish that many humans produce! And if you start raising the bar to exclude “idiots” from the test, then it's no longer a Turing test, but something different. That's true for some humans as well, when you ask them to talk about something they are not experts in. Especially older ones. LLMs, today, are in a very strange place: they can't do anything well, but they can do many things badly. But most human workers also perform badly; the expert ones are very, very rare nowadays!
Truly understanding programming
Posted Jul 26, 2024 14:43 UTC (Fri) by atnot (subscriber, #124910) [Link]
This seems like a silly criterion. If you can't reliably distinguish the gibberish created by a brick falling onto a keyboard from the gibberish created by a cat sitting on one, that doesn't make the brick as intelligent as a cat.
Truly understanding programming
Posted Jul 24, 2024 21:49 UTC (Wed) by Wol (subscriber, #4433) [Link]
Try asking it to write a program in DataBASIC :-) The only reports I've heard of people trying that are that it's been a complete disaster, competing for the "how many pages of printout can you generate from one syntax error" prize (something my fellow students did at school - they were programming in FORTRAN).
Cheers,
Wol