Brad Templeton, a consultant to Google on its development of driverless cars, thinks that the Trolley Problem, as applied to robocars, has more value as a philosophical exercise than as a practical concern. While these ethical quandaries certainly exist, he argues, computer-aided driving will be much safer than human driving, and the net result will be far fewer accidents and fatalities. From his post:
“Often this is mapped into the robocar world by considering a car which is forced to run over somebody, and has to choose who to run over. Choices suggested include deciding between:
- One person and two
- A child and an adult
- A person and a dog
- A person without right-of-way vs others who have it
- A deer vs. adding risk by swerving around it into the oncoming lane
- The occupant or owner of the car vs. a bystander on the street
- The destruction of an empty car vs. injury to a person who should not be on the road, but is.
I don’t want to pretend that this isn’t an interesting moral area, and it will indeed affect the law, liability and public perception. And at some point, programmers will evaluate these scenarios in their efforts. What I reject is the suggestion that this is high on the list of important issues and questions. I think it’s high on the list of questions that are interesting for philosophical debate, but that’s not the same as reality.
In reality, such choices are extremely rare. How often have you had to make such a decision, or heard of somebody making one? Ideal handling of such situations is difficult to decide, but there are many other issues to decide as well.
Secondly, in the rare situations where a human encounters such a moral dilemma, that person does not sit there and have an inner philosophical dialog on which is the most moral choice. Rather, they will go with a quick gut reaction, which is based on their character and their past thinking on such situations.”