“Did Your Robot Car Make The Right Decision?”

The Trolley Problem as applied to autonomous cars is currently one of the most popular legal and philosophical exercises. I think it may be somewhat overstated, since the average human driver routinely tries to swerve to avoid a crash, and that may well be the default setting for robocars in what will hopefully be a world of far fewer potential crashes. But the issue of liability will still need to be worked out. From Patrick Lin’s Wired piece about the possibility of “adjustable ethics settings”:

“Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake.

Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car.

Did your robot car make the right decision? This scene, of course, is based on the infamous ‘trolley problem’ that many folks are now talking about in AI ethics. It’s a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.

The thought-experiment is a moral dilemma, because there’s no clearly right way to go. It’s generally better to harm fewer people than more, to have one person die instead of five. But the car manufacturer creates liability for itself in following that rule, sensible as it may be. Swerving the car directly results in that one person’s death: this is an act of killing. Had it done nothing, the five people would have died, but you would have killed them, not the car manufacturer, which in that case would merely have let them die.

Even if the car didn’t swerve, the car manufacturer could still be blamed for ignoring the plight of those five people, when it held the power to save them. In other words: damned if you do, and damned if you don’t.

So why not let the user select the car’s ‘ethics setting’?”
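To make the “adjustable ethics setting” idea concrete, here is a minimal, purely hypothetical sketch in Python: a user-selectable setting that changes whether the car overrides the driver in the scenario Lin describes. The setting names, parameters, and decision logic are my own illustration, not anything proposed in Lin’s piece or implemented in any real vehicle.

from enum import Enum

class EthicsSetting(Enum):
    # Hypothetical user-selectable settings; not from Lin's article.
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"   # swerve if it reduces expected casualties
    NEVER_INTERVENE = "never_intervene"           # never take the wheel from the driver
    PROTECT_BYSTANDERS = "protect_bystanders"     # refuse maneuvers that endanger people off the road

def should_swerve(setting: EthicsSetting,
                  casualties_if_straight: int,
                  casualties_if_swerve: int,
                  swerve_endangers_bystander: bool) -> bool:
    """Decide whether the car overrides the driver and swerves.

    A toy illustration of how an 'ethics setting' could change the outcome
    of the same crash scenario -- not a real control policy.
    """
    if setting is EthicsSetting.NEVER_INTERVENE:
        return False
    if setting is EthicsSetting.PROTECT_BYSTANDERS and swerve_endangers_bystander:
        return False
    # Default: utilitarian comparison of expected casualties.
    return casualties_if_swerve < casualties_if_straight

# The scenario from the quoted passage: five people ahead, one on the swerve path.
print(should_swerve(EthicsSetting.MINIMIZE_TOTAL_HARM, 5, 1, True))  # True  -> swerve
print(should_swerve(EthicsSetting.PROTECT_BYSTANDERS, 5, 1, True))   # False -> stay the course

The point of the sketch is only that the liability question shifts with the setting: the same inputs produce different interventions, and therefore different answers to who killed whom and who merely let them die.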
