Programming
khipster, 2016-03-25 10:56:17

AlphaGo, developed by Google, has beaten the champion of Go. Where was the complexity in building the game algorithm?

Kasparov was beaten at chess by a computer long ago, and there was a lot of hype around Go: people said a computer would never win there.
Wiki:
The AlphaGo program combines Monte Carlo tree search (MCTS) with convolutional neural networks trained by deep learning to evaluate positions and pick the most advantageous moves. The essence of the method (named by analogy with the Monte Carlo method in computational mathematics) is that candidate positions reachable from the current board are chosen first, and then, starting from each of them in turn, a large number of random games are played out. The position that gives the highest ratio of wins to losses is chosen for the next move. (See the Monte Carlo methods section in Computer Go.) Prior to AlphaGo, the most successful Go programs also used the Monte Carlo method[1].
In short, is it still just enumeration of options (smart and optimized, but enumeration nonetheless), i.e. was the difficulty simply a matter of computing power?
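
For reference, a minimal Python sketch of the "flat" Monte Carlo playout idea described in the quote. Tic-tac-toe stands in for a Go position purely to keep the example self-contained; actual AlphaGo additionally builds a search tree and uses its neural networks to bias move selection and evaluate positions rather than relying on uniform random playouts alone.

import random

# Tic-tac-toe stands in for Go here only so the sketch runs on its own;
# the playout logic is the "flat" Monte Carlo idea, not AlphaGo's search.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def play(board, move, player):
    return board[:move] + player + board[move + 1:]

def random_playout(board, to_move):
    # Play random legal moves to the end; return the winner ('X', 'O') or None for a draw.
    while winner(board) is None and legal_moves(board):
        board = play(board, random.choice(legal_moves(board)), to_move)
        to_move = "O" if to_move == "X" else "X"
    return winner(board)

def choose_move(board, player, playouts_per_move=200):
    # Pick the move whose random playouts give the best win ratio for `player`.
    opponent = "O" if player == "X" else "X"
    best_move, best_ratio = None, -1.0
    for move in legal_moves(board):
        next_board = play(board, move, player)
        wins = sum(random_playout(next_board, opponent) == player
                   for _ in range(playouts_per_move))
        ratio = wins / playouts_per_move
        if ratio > best_ratio:
            best_move, best_ratio = move, ratio
    return best_move

print(choose_move("X.O.X.O..", "X"))  # almost always picks the immediate win at square 8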

4 answers
vchc, 2016-03-25
@vchc

The current state of affairs in AI is such that most of the problems are not scientific but engineering and organizational: allocate people and hardware, organize the process, choose the architecture/methods/heuristics, and implement it all in code. Under such conditions, the question of a project's economic feasibility always arises. For Google it was feasible: they likely got back more in advertising value than they spent. High-profile events like this have a beneficial effect on corporate management when deciding whether to sign contracts.

Ivan, 2016-03-25
@LiguidCool

The main difficulty is the much greater variability of moves (far more than in chess). Just try playing Go.
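
A rough back-of-the-envelope comparison of the scale, using commonly cited average figures rather than exact values (a branching factor of about 35 over roughly 80 plies for chess versus about 250 over roughly 150 plies for Go):

chess_tree = 35 ** 80     # roughly 10^123 positions in the game tree
go_tree = 250 ** 150      # roughly 10^359 positions
print(f"chess ~10^{len(str(chess_tree)) - 1}")
print(f"go    ~10^{len(str(go_tree)) - 1}")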

ivodopyanov, 2016-03-31
@ivodopyanov

In fact, the main idea from DeepMind, with which they taught a computer both Atari games and Go, is that a huge lookup table can be approximated quite well by a neural network.
For example, the computer sees two different positions, but thanks to the neural network processing them it "understands" that they are very similar (say, one is obtained from the other by a simple shift along one of the axes). That means the correct decisions for them will be almost the same.
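
A minimal sketch of that idea: replace a giant "position -> value" table with a small convolutional network. The layer sizes here are illustrative, not the actual AlphaGo or DQN architecture, and the code assumes PyTorch is installed.

import torch
import torch.nn as nn

class ValueNet(nn.Module):
    def __init__(self, board_size=19, planes=1):
        super().__init__()
        # Convolutions share weights across the whole board, so a pattern
        # shifted along an axis produces nearly the same features, which is
        # why "similar" positions end up with similar value estimates.
        self.features = nn.Sequential(
            nn.Conv2d(planes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * board_size * board_size, 1),
            nn.Tanh(),  # value estimate in [-1, 1] for the player to move
        )

    def forward(self, board):
        return self.head(self.features(board))

# One empty 19x19 board with a single feature plane -> one scalar value estimate.
net = ValueNet()
value = net(torch.zeros(1, 1, 19, 19))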

jewubinin, 2016-10-08
@jewubinin

Simple brute force works great in chess but doesn't work in Go. That was part of the difficulty as well.
