If Go is mentioned in the US, it’s in the context of complicated games, or hard games, or games with some element of “purity.” It’s just white stones and black stones on a nineteen-by-nineteen board. You play by putting stones down, never moving them; if you surround your opponent’s stones, they are “captured,” and that’s more or less it. Ostensibly, no game could be simpler. But if you’ve ever tried to learn how to play Go, you’ll know it feels a lot more like the spoiled-for-choice paralysis of staring at a blank page.
The board has 361 spaces, all of them technically available to start, even if there are specific areas that make for good (or popular) moves. With so many choices available, it becomes difficult for a beginner to assess them all, and for a long time, this was the problem for computers too: machines had to spend time and effort assessing moves a professional human player has the experience to know not to play. The number of legal Go positions (~2×10^170) is higher than the number of atoms in the observable universe (10^80). So the brute-force method (which is sort of available for Chess and Checkers) really was off-limits for machines.
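To get a feel for that arithmetic, here is a quick back-of-the-envelope sketch in Python. The 2×10^170 figure for legal positions comes from the text above; the 3^361 number below is just a crude upper bound, counting every way to mark each of the 361 intersections as empty, black, or white (most of those configurations are not legal positions):

```python
# Each of the 361 intersections can be empty, black, or white, giving
# 3^361 raw board configurations -- an upper bound on legal positions.
upper_bound = 3 ** 361

# That number has 173 digits, i.e. it is on the order of 10^172.
print(len(str(upper_bound)))

# Atoms in the observable universe, per the usual estimate.
atoms = 10 ** 80

# The configuration count dwarfs even the square of the atom count.
print(upper_bound > atoms ** 2)
```

Even before filtering down to legal positions, the raw count makes it obvious why exhaustively enumerating boards was never an option.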
I know most of what I know about Go because of how engrossed I got in a 65-year-old novel, Yasunari Kawabata’s Master of Go (1951). I figured out how to play, and played mostly against computers and puzzle books. I watched YouTube videos of professional games, followed the move sets available online.
In line with current trends in computer science, AlphaGo approaches the problem sort of like I did. It learned to play Go when the Google DeepMind team behind the algorithm fed it vast swathes of professional games to watch and analyze. Then they had it play against other Go algorithms, and against itself, over and over and over. Now it can beat other Go-playing machines 99.8% of the time, and it just beat the European champion—Fan Hui—in a decisive 5-0 set, with no handicaps.
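The two-phase pattern described above can be caricatured in a few lines of code. This is only a toy sketch, and every name in it is hypothetical: the “game” is a stand-in (each player picks a number; the larger one wins), and AlphaGo’s real pipeline involves deep policy and value networks plus Monte Carlo tree search, nothing like this. The shape of the process, though, is the same: first learn preferences from recorded games, then sharpen them through self-play.

```python
import random

MOVES = [0, 1, 2]
prefs = {m: 1.0 for m in MOVES}  # move -> learned weight

def learn_from_records(records):
    # Phase 1: imitate experts -- upweight moves seen in winning games.
    for move, won in records:
        if won:
            prefs[move] += 1.0

def pick_move():
    # Sample a move in proportion to its learned weight.
    total = sum(prefs.values())
    return random.choices(MOVES, weights=[prefs[m] / total for m in MOVES])[0]

def self_play(rounds):
    # Phase 2: play against itself, reinforcing whichever move "wins"
    # (here, simply the larger number).
    for _ in range(rounds):
        a, b = pick_move(), pick_move()
        if a != b:
            prefs[max(a, b)] += 0.5

learn_from_records([(2, True), (2, True), (1, True), (0, False)])
self_play(1000)
print(max(prefs, key=prefs.get))  # the move with the highest learned weight
```

The rich-get-richer dynamic is the point: moves that win get played more, and moves that get played more accumulate wins, which is (very loosely) why playing itself “over and over and over” works.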
Google Maps learns when you like to leave for work, and Siri learns how you mumble, so what AlphaGo is doing isn’t totally new for our computers. But, from a Go perspective, it’s a twist on an old tradition. Kawabata’s novel describes the way the professional Go world works—there’s a ladder-based system with ranks and numbers that determine who has to play whom to prove they’re the best, or what each game means for the ranking (and career) of a professional player. These professional players take on apprentices, and learn by playing each other. The novel centers on two players—a young upstart and an old master—who each represent ways of thinking about Japan in the aftermath of World War II.
The young man wins, the older man dies, and Kawabata implies that with this defeat comes the death of an older way of behaving. It doesn’t mean the same thing to be a Go master anymore. There will never be another player quite like the older man.
In the US, at least, there was a lot of commotion around Deep Blue facing off against Garry Kasparov in 1997, as if the future of the human race hung in the balance. Some people believed the game was too complex, involved too much lateral thinking for a computer to “understand” it. It took a while, but now, no matter how good you are at Chess, your smartphone can beat you while making obnoxious quips and telling you what your friends are out doing while you try to prove you can keep up with a machine the size of a deck of cards.
AlphaGo is scheduled to play Lee Sedol, the world’s current best, in March, and Sedol appears to be confident that he can beat it. I assume he’ll be studying his old games in the meantime.