Stuart Russell

It’s too early to say if DeepMind’s obliteration of Go champion Lee Se-dol will prove to have wide-ranging applications that go far beyond the board, but it does show the prowess of self-learning AI. AlphaGo played millions of games on its own, and it could easily play billions more and improve further. Practice may not make perfect, but it can seriously diminish mistakes.

As AI expert Stuart Russell says in an AFP article, the triumph “shows that the methods we do have are even more powerful than we first thought.” That means progress can come faster than expected, but where we’re headed is undetermined. An excerpt:

Until just five months ago, computer mastery of the 3,000-year-old game of Go, said to be the most complex ever invented, was thought to be at least a decade off.

But then AlphaGo beat European Go champ Fan Hui, and its creators decided to test the programme’s real strength against Lee, one of the game’s all-time masters.

Game-playing is a crucial measure of AI progress — it shows that a machine can execute a certain “intellectual” task better than humans.

Advance for science

A key test was when IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997.

The game of Go is more complex than chess, and has more possible board configurations than there are atoms in the Universe.

Part of the reason for AlphaGo’s success is that it is partly self-taught — having played millions of games against itself, after initial programming, to figure out the game and hone its tactics through trial and error.

“It is not the beginning of the end of humanity. At least if we decide we want to aim for safe and beneficial AI, rather than just highly capable AI,” Oxford University future technology specialist Anders Sandberg said of Lee’s drubbing.

“But there is still a lot of research that needs to be done to get things right enough that we can trust (and take pride in!) our AIs.”
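For scale (my back-of-the-envelope figures, not the article’s): a crude bound on Go’s configurations comes from noting that each of the 19 × 19 = 361 points is empty, black, or white, and John Tromp’s 2016 count of strictly legal positions is lower but still dwarfs the roughly 10^80 atoms in the observable universe.

```latex
\underbrace{3^{361}}_{\text{3 states per point}} \approx 1.74 \times 10^{172},
\qquad
N_{\mathrm{legal}} \approx 2.08 \times 10^{170} \;\gg\; 10^{80} \approx N_{\mathrm{atoms}}
```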
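To make the “self-taught” idea above concrete, here is a minimal sketch of trial-and-error self-play: tabular Q-learning on tic-tac-toe, in Python. It illustrates only the general principle; AlphaGo’s actual system combined deep policy and value networks with Monte Carlo tree search, and every name and parameter below is my own invention for the toy.

```python
# Toy illustration of self-play trial-and-error learning: tabular
# Q-learning on tic-tac-toe. NOT DeepMind's pipeline -- AlphaGo used
# deep neural networks plus Monte Carlo tree search at far larger scale.
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)      # (board string, move index) -> estimated value
ALPHA, EPSILON = 0.3, 0.1   # learning rate, exploration rate (arbitrary)

def choose(board, moves):
    """Epsilon-greedy: usually pick the best-known move, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])

def self_play_episode():
    """One game of the agent against itself, then credit assignment."""
    board, player, history = " " * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, moves)
        history.append((player, board, move))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w or " " not in board:
            for p, b, m in history:   # reward every move by the outcome
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(b, m)] += ALPHA * (reward - Q[(b, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):  # AlphaGo's case: millions of games, not 50,000
    self_play_episode()
print(f"{len(Q)} state-action pairs explored via self-play")
```

Even this toy agent gets measurably harder to beat with more episodes, which is the dynamic the excerpt describes: more self-play, fewer mistakes.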

At Nature, a quartet of researchers write of their concerns as artificial intelligence matures, worrying about our robot brethren contributing to warmongering and income inequality. In “Take a Stand on AI Weapons,” Berkeley computer science professor Stuart Russell focuses on the former, questioning the wisdom of lethal autonomous weapons systems. An excerpt:

The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).

Technologies have reached a point at which the deployment of such systems is — practically if not legally — feasible within years, not decades. The stakes are high: LAWS have been described as the third revolution in warfare, after gunpowder and nuclear arms.  

Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. …

In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.•
