Ryan Gariepy

You don’t need conscious machines to wreak havoc upon the world: Weak AI can cause serious disruptions in employment, and autonomous machines can be tasked with lethal work. Nikola Tesla dreamed of military drones bringing peace to the world, but that hasn’t been the reality. If some government (or rogue state) allows pilotless planes to operate fully autonomously, the resulting weapons systems might be deadlier still. Of course, given the human track record for mass violence, they might not be. From Robert McMillan at Wired:

Military drones like the Predator currently are controlled by humans, but [Clearpath CTO Ryan] Gariepy says it wouldn’t take much to make them fully automatic and autonomous. That worries him. A lot. “The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now,” he says, “but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.”

For Gariepy, the problem is one of international law, as well as programming. In war, there are situations in which the use of force might seem necessary, but might also put innocent bystanders at risk. How do we build killer robots that will make the correct decision in every situation? How do we even know what the correct decision would be?

We’re starting to see similar problems with autonomous vehicles. Say a dog darts across a highway. Does the robo-car swerve to avoid the dog but possibly risk the safety of its passengers? What if it isn’t a dog, but a child? Or a school bus? Now imagine a battle zone. “We can’t agree on how to implement those bits of guidance on the car,” Gariepy says. “And now what we’re actually talking about is taking that leap forward to building a system which has to decide on its own and when it’s going to preserve life and when it’s going to take lethal force.”•
