Computer scientists long labored (and, ultimately, successfully) to make machines superior at backgammon and chess, but they had to teach those AIs the rules and moves, hand-designing the brute force of their play. Not so with Google's new game-playing computer, which can muster a mean game of Space Invaders or Breakout with no coaching. For now this Deep Learning is focused on retro pastimes, but soon enough it will be serious business. From Rebecca Jacobson at PBS Newshour:
This isn’t the first game-playing A.I. program. IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov in 1997. In 2011, an artificial intelligence computer system named Watson won a game of Jeopardy against champions Ken Jennings and Brad Rutter.
Watson and Deep Blue were great achievements, but those computers were loaded with all the chess moves and trivia knowledge they could handle, [Demis] Hassabis said in a news conference Tuesday. Essentially, they were trained, he explained.
But in this experiment, designers didn’t tell DQN how to win the games. They didn’t even tell it how to play or what the rules were, Hassabis said.
“(Deep Q-network) learns how to play from the ground up,” Hassabis said. “The idea is that these types of systems are more human-like in the way they learn. Our brains make models that allow us to learn and navigate the world. That’s exactly the type of system we’re trying to design here.”•
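The "ground up" learning Hassabis describes is reinforcement learning: the system improves from reward signals alone, with no rules or strategies supplied. A minimal sketch of that idea, using tabular Q-learning on a toy five-position corridor — emphatically not DeepMind's actual DQN, which pairs the same principle with a deep neural network reading raw screen pixels; every name and parameter below is an illustrative assumption:

```python
import random

# Illustrative sketch only: tabular Q-learning on a toy 5-state corridor,
# NOT DeepMind's actual DQN. The agent is never told the rules; it only
# observes states, picks actions, and receives rewards.

N_STATES = 5          # positions 0..4; reaching position 4 ends the episode
ACTIONS = (-1, +1)    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # assumed hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Hidden environment dynamics: the learning rule never sees these."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for _ in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: driven purely by the observed reward
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy heads right toward the reward from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

No line of the training loop encodes "move right is good" — that preference emerges from reward alone, which is the human-like quality Hassabis is pointing at.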
Tags: Demis Hassabis, Rebecca Jacobson