“Who Would Not Think That A Good Use Of Technology?”


The American military has thus far refused to consider using autonomous weapons systems, which is good, but is it our choice alone to make? If one world power (or a smaller, rogue nation aspiring to be one) were to deploy such machines, how would others resist? The technology is trending toward faster, cheaper, and more out of control, so it’s not difficult to imagine such a scenario. I think these systems are inevitable in the long run, but hopefully there will be much more time to prepare for what they’ll mean.

In a Financial Times column, John Thornhill writes of fears that LAWS (Lethal Autonomous Weapons Systems) could fall into the wrong hands, such as those of warlords or tyrants. Of course, it’s easy to make the argument that all hands are the wrong ones. The opening:

Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.

Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.

Who would not think that a good use of technology?

As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.

In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”•
