I don’t think a questionable Turing Test means we should be granting robots marriage licenses or Social Security cards, but there are ethical and legal questions to be addressed as society becomes increasingly automated and large-scale performance enhancement becomes widespread. From Mark Goldfeder’s CNN piece “The Age of Robots Is Here”:
“Robotic legal personhood in the near future makes sense. Artificial intelligence is already part of our daily lives. Bots are selling stuff on eBay and Amazon, and semiautonomous agents are determining our eligibility for Medicare. Predator drones require less and less supervision, and robotic workers in factories have become more commonplace. Google is testing self-driving cars, and General Motors has announced that it expects semiautonomous vehicles to be on the road by 2020.
When the robot messes up, as it inevitably will, who exactly is to blame? The programmer who sold the machine? The site owner who had nothing to do with the mechanical failure? The second party, who assumed the risk of dealing with the robot? What happens when a robotic car slams into another vehicle, or even just runs a red light?
Liability is why some robots should be granted legal personhood. As a legal person, the robot could carry insurance purchased by its employer. As an autonomous actor, it could indemnify others from paying for its mistakes, giving the system a sense of fairness and ensuring commerce could proceed unchecked by the twin fears of financial ruin and of not being able to collect. We as a society have given robots power, and with that power should come the responsibility of personhood.
From the practical legal perspective, robots could and should be people.”