My takeaway from the comment in the heading, offered by BMW executive Ian Robertson, is a little different from Leonid Bershidsky’s interpretation in Bloomberg View. Bershidsky believes the complex moral questions about responsibility for the actions of autonomous vehicles mean that humans may never truly be able to let go of the wheel. Wow, never is a long time. The inference I draw is that if moral philosophy is the chief concern of auto-industry executives, they believe the technology is a fait accompli. Sure, they could be wrong, but if the machinery is perfected, moral quandaries won’t keep such cars permanently parked. We’ll just be forced to answer difficult questions sooner rather than later.
Self-driving cars are the subject of more hype than even true artificial intelligence, perhaps because they already exist and a number of big companies are committed to making them a marketable reality. So it’s worth listening when a top executive of one of these companies says self-driving vehicles are a long way off.
“The technology will be held back by the ultimate moral question on who’s responsible,” said Ian Robertson, head of sales for Bayerische Motoren Werke in Munich.
Figuring this out isn’t as easy as simply changing insurance rules. Imagine you’re driving along a narrow mountain road at high speed, and a child jumps in front of your car. If you swerve to avoid hitting him, you’ll crash into a cliff or plunge into an abyss. In both cases, it means certain death for you.
Now imagine the car is driving itself.
“An algorithm will make a decision which might not be acceptable from a cultural or societal point of view,” Robertson explained.