“This Takes The Robot-Car Industry Down Legally And Morally Dangerous Paths”

Research has shown that a gentrifying neighborhood rocked by a shocking headline murder doesn’t stop gentrifying. The glaring, but rare, tragedy isn’t enough to reverse progress. The rewards of having a piece of an interesting, but still affordable, community outweigh the risks. I think the same will be true of autonomous vehicles, which will make the streets and highways far safer even if occasionally there’s a loud crash.

One of the biggest moral quandaries about driverless cars is one on the margins: When a collision is imminent, software, not humans, would decide who is likely to live and who is likely to die. I would think the fairest scenario would be to aim for the best outcome for the greatest number of those involved. But perhaps car owners will be able to opt into a “moral system” the way they can choose organ donation. Maybe there’ll be an insurance break for those who do. Who knows? It’s likely, though, that this decision, like the steering wheel itself, won’t be in our hands.

In Patrick Lin’s new Wired article, “The Robot Car of Tomorrow May Just Be Programmed to Hit You,” he analyzes all aspects of this ethical problem. An excerpt:

“Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?”
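
To make the concern in Lin’s excerpt concrete, here is a minimal, purely hypothetical sketch of the kind of crash-optimization rule he describes. Everything in it, the Obstacle fields, the harm weights, and the choose_target helper, is an illustrative assumption rather than any manufacturer’s actual logic; it simply shows how a rule that “minimizes harm” by preferring heavier, safer vehicles ends up systematically selecting one class of vehicle over another.

```python
# Hypothetical illustration only: a toy "crash-optimization" scorer.
# All fields, weights, and names are assumptions, not any vendor's real logic.
from dataclasses import dataclass


@dataclass
class Obstacle:
    name: str
    mass_kg: float          # heavier vehicles absorb more of the impact
    safety_rating: float    # 0.0-1.0, higher means better occupant protection


def expected_harm(obstacle: Obstacle) -> float:
    """Lower score = less predicted total harm (a crude utilitarian proxy)."""
    # Toy heuristic: predicted harm falls as the struck vehicle's mass
    # and safety rating rise.
    return 1000.0 / obstacle.mass_kg + (1.0 - obstacle.safety_rating)


def choose_target(options: list[Obstacle]) -> Obstacle:
    # Picking the minimum-harm option means systematically steering toward
    # heavier, safer vehicles; this is the targeting behavior the quote warns about.
    return min(options, key=expected_harm)


if __name__ == "__main__":
    volvo = Obstacle("Volvo SUV", mass_kg=2100, safety_rating=0.9)
    mini = Obstacle("Mini Cooper", mass_kg=1200, safety_rating=0.7)
    print(choose_target([volvo, mini]).name)  # prints "Volvo SUV"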
