How would this work for books?
Posted Jul 18, 2021 0:38 UTC (Sun)
by NYKevin (subscriber, #129325)
In reply to: How would this work for books? by developer122
Parent article: GitHub is my copilot
There's zero evidence that a human understands the code they write, either. Yes, you can ask the human questions about it and judge their responses, but even if we can't train a chatterbot to answer such questions now, it will likely be possible within the next few years (compare and contrast GPT-3), at which point everyone will decide that "answering simple questions about code" no longer qualifies as "evidence." Unlike, say, a news article (for which GPT-3 still struggles to distinguish between fantasy and reality), all of the relevant information is contained within the source code itself, so there's no external reality that it has to know about or understand (aside from straightforward vocabulary issues such as "what do humans call this design pattern?"). The problem is therefore much less difficult, and may already be solvable with existing text generation systems.
This is the same process that chess went through, of course. You can't ask Stockfish or AlphaZero "Why is this a good/bad move?" except indirectly, by asking "What is White/Black's best response to this move, and Black/White's reply, and so on?" (at which point, it will happily show you how one side wins the other's queen in some convoluted 15+ move line that you could never have found on your own). But nobody would seriously argue that humans have a deeper understanding of chess than those engines merely because the engines are unable to verbalize their reasoning in simple terms. On the other hand, when an engine had just barely defeated Kasparov in the 90s, everyone abruptly decided that computers excelling at chess was no longer a sign of intelligence.
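(For the curious: that indirect interrogation is easy to script. A minimal sketch using the python-chess library, assuming a Stockfish binary is on your PATH; the library, the binary name, and the search depth are all my own assumptions here:)

    import chess
    import chess.engine

    # Ask the engine "why" the only way it can answer: a score plus
    # its best line of play (the principal variation).
    board = chess.Board()  # starting position; substitute any position you like
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary name
    try:
        info = engine.analyse(board, chess.engine.Limit(depth=20))
        print("score:", info["score"])                   # evaluation for the side to move
        print("line:", board.variation_san(info["pv"]))  # the convoluted line itself
    finally:
        engine.quit()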
TL;DR: "Real" AI just means "anything that AI can't do yet."
Posted Jul 18, 2021 5:46 UTC (Sun)
by gfernandes (subscriber, #119910)
[Link] (7 responses)
That, by the way, was Deep Blue.
I doubt anyone could describe Kasparov in quite the same way!
Posted Jul 18, 2021 6:16 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
Deep Blue used a "classic" algorithm, with a simple recursive search and a fine-tuned weight function. We can understand how it works.
Modern neural-net chess programs beat classic algorithms. And it's not even close. They work much like the human brain does, by recognizing patterns.
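(To make the contrast concrete, the "classic" recipe fits in a few lines. A toy Python sketch using python-chess for move generation; the material-only weights and the fixed depth are my own illustrative choices, not Deep Blue's actual code:)

    import chess

    # The "fine-tuned weight function" part, reduced to bare material values.
    # Real engines add many positional terms with carefully chosen weights.
    VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9}

    def evaluate(board):
        # Positive scores favor White.
        score = 0
        for piece_type, value in VALUES.items():
            score += value * len(board.pieces(piece_type, chess.WHITE))
            score -= value * len(board.pieces(piece_type, chess.BLACK))
        return score

    def minimax(board, depth):
        # The "simple recursive search" part (no pruning, for clarity).
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        scores = []
        for move in board.legal_moves:
            board.push(move)
            scores.append(minimax(board, depth - 1))
            board.pop()
        return max(scores) if board.turn == chess.WHITE else min(scores)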
Posted Jul 18, 2021 23:50 UTC (Sun)
by khim (subscriber, #9252)
[Link] (3 responses)
Actually, Stockfish has lost to Leela Chess Zero only once. Otherwise it holds the #1 rank quite robustly. Of course, the fact that a relatively simple (and resource-hungry, since modern CPUs are not well designed to run neural networks) pattern-recognizing program beats literally everything else, while only the top engine built on everything humanity has discovered about chess over hundreds of years can hold its own, is still amazing.
Posted Jul 19, 2021 1:31 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Though, to be fair, it didn't participate in the regular computer chess tournaments, and Stockfish got better.
Posted Jul 19, 2021 9:00 UTC (Mon)
by sandsmark (guest, #62172)
[Link] (1 responses)
Hasn't Stockfish merged a neural net evaluator? Or was that after the tournament?
Posted Jul 19, 2021 9:37 UTC (Mon)
by khim (subscriber, #9252)
[Link]
It regained the crown in the 18th season and got neural networks in the 19th. I think the "neural network revolution" is similar to the "demise of assembler" at the end of the last century. Hand-written assembler was still better than what high-level languages produced, but the development time was so drastically different that it was impossible for assembler developers to deliver anything fast enough to stay competitive. Stockfish uses NNUE to deal with fringe cases where the developers just don't have time to fine-tune the algorithm by hand.

The fate of computer chess (and the world, arguably) depends on whether chip developers will be able to develop massive 3D chips, with thousands and later maybe even millions of layers… Moore's law turned 90 degrees, in a sense. For now this is only used for flash (but memories have always adopted new technologies first, because they need a lot of transistors but have a very simple structure); if active components follow, it would mean the demise of the modern computing paradigm and the rise of neural networks. This is because of power consumption: you couldn't put a million cores into one chip while keeping them in the gigahertz range; the whole thing would consume so much power that it would be impossible to cool (even if you found a way to supply all that power). Modern programming techniques couldn't work on trillions of 1 MHz cores — but neural networks can. Whether this would be enough to create strong AI or not… nobody knows.
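(The "efficiently updatable" part of NNUE deserves a small illustration: a move toggles only a handful of input features, so the first layer's output can be patched rather than recomputed. A rough numpy sketch of just that idea; the layer sizes and feature indices are invented, and real NNUE nets are quantized integer networks, far larger than this:)

    import numpy as np

    N_FEATURES, HIDDEN = 40960, 256           # made-up, roughly NNUE-like sizes
    W = np.random.randn(HIDDEN, N_FEATURES)   # first-layer weights
    b = np.random.randn(HIDDEN)               # first-layer bias

    def full_forward(active_features):
        # Naive approach: recompute the whole first layer for every position.
        return b + W[:, list(active_features)].sum(axis=1)

    def incremental_update(accumulator, added, removed):
        # NNUE's trick: patch the accumulator in O(changed features)
        # instead of O(all features), since a move changes only a few.
        for f in added:
            accumulator += W[:, f]
        for f in removed:
            accumulator -= W[:, f]
        return accumulator

    acc = full_forward({11, 523, 30001})                     # initial position
    acc = incremental_update(acc, added={77}, removed={11})  # after one "move"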
Posted Jul 19, 2021 0:18 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
I am not sure I agree with that.
A classical chess engine is essentially made up of three parts:
1. An opening book that describes standard lines in opening theory.
2. A tree search (minimax) algorithm for the midgame. This also requires the use of a heuristic evaluation function to cut off searching before it gets too deep. In modern engines, this evaluation function is "smart" and considers the relative positions of the pieces and pawns as well as their material values.
3. An endgame tablebase that gives you the exact lines to play in any position where N or fewer pieces are on the board. For modern engines, N=7 is generally the limit (at least for publicly-available datasets, anyway), but in Kasparov's day, N would have been much smaller. (A probing sketch follows after this list.)
High-level chess players will absolutely memorize the same information as is present in an opening book, although perhaps not to the same depth as the engine does. Similarly, human players do imagine future lines and evaluate their endpoints based on heuristics, using a process that is conceptually similar to minimax with aggressive pruning. Finally, the best human players spend a lot of time learning their endgames. They don't memorize an entire tablebase, of course, but they learn the patterns, and so this can be characterized as a particularly smart compression algorithm (i.e. I don't need to memorize hundreds of minor translations or rotations of the same basic mating pattern).
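(To make part 3 concrete: a tablebase probe is a lookup, not a search. A minimal sketch using python-chess's syzygy module; the directory path is a placeholder you'd point at downloaded Syzygy files, and python-chess itself is an assumption:)

    import chess
    import chess.syzygy

    # KQ vs K with White to move: a textbook tablebase win.
    board = chess.Board("4k3/8/4K3/8/8/8/8/3Q4 w - - 0 1")
    with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
        print("WDL:", tablebase.probe_wdl(board))  # 2 = win for the side to move
        print("DTZ:", tablebase.probe_dtz(board))  # distance to a zeroing (capture/pawn) move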
Posted Jul 19, 2021 6:14 UTC (Mon)
by gfernandes (subscriber, #119910)
[Link]
> Modern neural-net chess programs beat classic algorithms. And it's not even close.