It’s too early to say whether the methods behind DeepMind’s obliteration of Go champion Lee Se-dol will find wide-ranging applications far beyond the board, but the victory does show the prowess of self-learning AI. AlphaGo played millions of games on its own, and it could easily play billions more and improve further. Practice may not make perfect, but it can seriously diminish mistakes.
As AI expert Stuart Russell says in an AFP article, the triumph “shows that the methods we do have are even more powerful than we first thought.” That means progress may arrive faster than expected, even though where we’re headed remains undetermined. An excerpt, followed by a brief sketch of what “self-play” learning looks like in code:
Until just five months ago, computer mastery of the 3,000-year-old game of Go, said to be the most complex ever invented, was thought to be at least a decade off.
But then AlphaGo beat European Go champ Fan Hui, and its creators decided to test the programme’s real strength against Lee, one of the game’s all-time masters.
Game-playing is a crucial measure of AI progress — it shows that a machine can execute a certain “intellectual” task better than humans.
Advance for science
A key test was when IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997.
The game of Go is more complex than chess, and has more possible board configurations than there are atoms in the Universe.
Part of the reason for AlphaGo’s success is that it is partly self-taught: after its initial programming, it played millions of games against itself, figuring out the game and honing its tactics through trial and error.
“It is not the beginning of the end of humanity. At least if we decide we want to aim for safe and beneficial AI, rather than just highly capable AI,” Oxford University future technology specialist Anders Sandberg said of Lee’s drubbing.
“But there is still a lot of research that needs to be done to get things right enough that we can trust (and take pride in!) our AIs.”
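To make the excerpt’s “trial and error” concrete, here is a minimal, illustrative sketch of self-play learning in Python. It is emphatically not DeepMind’s method: the game is tic-tac-toe rather than Go, the “policy” is a simple lookup table rather than a deep neural network, and the update rule is a crude win/loss reinforcement. All names in it are invented for illustration. What it shares with AlphaGo is only the core loop: play against yourself, then strengthen the moves that showed up in winning games.

```python
import random
from collections import defaultdict

# Toy stand-in for Go: 3x3 tic-tac-toe. The point is the training loop, not
# the game. The agent improves purely by playing against itself and
# reinforcing moves that appeared in winning games: trial and error.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Tabular "policy": one positive weight per (board, player, move).
# Moves are sampled in proportion to their weights.
weights = defaultdict(lambda: 1.0)

def choose_move(board, player):
    legal = [i for i, cell in enumerate(board) if cell is None]
    w = [weights[(tuple(board), player, m)] for m in legal]
    return random.choices(legal, weights=w)[0]

def self_play_game():
    """Play one full game against itself; return (move history, winner)."""
    board, player, history = [None] * 9, 'X', []
    while winner(board) is None and None in board:
        move = choose_move(board, player)
        history.append((tuple(board), player, move))
        board[move] = player
        player = 'O' if player == 'X' else 'X'
    return history, winner(board)

def train(games=20000, step=0.1):
    for _ in range(games):
        history, champ = self_play_game()
        if champ is None:
            continue  # ignore draws in this crude scheme
        for state, player, move in history:
            # Strengthen the winner's moves, weaken the loser's.
            delta = step if player == champ else -step
            key = (state, player, move)
            weights[key] = max(0.01, weights[key] + delta)

train()
```

AlphaGo’s actual training swaps the lookup table for deep neural networks and adds Monte Carlo tree search for lookahead, but the recipe the excerpt describes is the same: millions of self-play games, each nudging the policy toward whatever wins.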