Google’s stream of the 5-game Go series between DeepMind’s AlphaGo and Lee Sedol was odd. It put little vector-graphic landmarks from Seoul opposite little vector-graphic landmarks from London. But I never once heard it suggested that this was a battle between Korea and the UK. Maybe it would have been more appropriate to put a brain on one side and a processor on the other, but that’s equally inaccurate. It may not seem so at first, but AlphaGo and its victory represent human effort and human progress. While we still have “the machines” under control, they are tools for our own advancement.
Not only was AlphaGo programmed, tested, and assisted by a team of humans, but the games of Go it studied are our games. Demis Hassabis, DeepMind co-founder (and former AI programmer at recently dissolved Lionhead Studios), explained after the fourth game that none of the games AlphaGo had learned from were Lee’s—they were all “strong amateur games from online servers.” The machine needed “tens of millions” of games to form the knowledge base it would extrapolate from, and while I wouldn’t call myself a strong amateur, I might know one or two people whose games AlphaGo has studied.
One of the reasons the Google DeepMind team set up the series between AlphaGo and Lee Sedol was that they were hoping their AI would lose. They weren’t “rooting” for the other side, but they didn’t have a handle on just how strong AlphaGo was, and only by pitting it against the best of the best could they determine where the program’s weaknesses were. After Game 2, Lee Sedol said he couldn’t find any weaknesses either, but that night, he and several other high-level players stayed up until the wee hours of the morning studying its moves and mistakes. He went on to lose the third game, but the complex amashi strategy he tried in Game 4 eventually gained him a win. In the fifth game he tried something similar and struggled to stay ahead of AlphaGo, which pulled ahead in the end.
Professional Go player Michael Redmond said that to call Lee’s 4-1 defeat the “end of Go” is wrongheaded: humans will always find it compelling to use the game to intellectually challenge themselves and others, and this series has led more people to pay attention to Go than anything else in the last half century. Said Redmond, “As far as the game itself is concerned, I truly believe that if AlphaGo continues to improve… we’ll be seeing it play moves that are new to human professionals, and people will imitate or emulate it.”
As the Go world changes, and the AlphaGo team figures out how to train their AI to avoid making some of the mistakes seen in the series, DeepMind will begin to collaborate with the UK’s National Health Service. Hassabis says they’ll start by fixing a lot of the old NHS software before moving on to machine learning techniques, but maybe eventually, DeepMind’s organization of our healthcare data will lead to discoveries we couldn’t make on our own, hitting peaks we couldn’t reach alone.