As robots proliferate, we’re going to require far more than three laws to govern their actions. The questions are seemingly endless, and the answers will likely have to be very elastic. The opening of an Economist report about the RoboLaw group’s recently released findings:
“WHEN the autonomous cars in Isaac Asimov’s 1953 short story ‘Sally’ encourage a robotic bus to dole out some rough justice to an unscrupulous businessman, the reader is to believe that the bus has contravened Asimov’s first law of robotics, which states that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm.’
Asimov’s ‘three laws’ are a bit of science-fiction firmament that has escaped into the wider consciousness, often taken to be a serious basis for robot governance. But robots of the classic sort, and bionic technologies that enhance or become part of humans, raise many thorny legal, ethical and regulatory questions. If an assistive exoskeleton is implicated in a death, who is at fault? If a brain-computer interface is used to communicate with someone in a vegetative state, are those messages legally binding? Can someone opt to replace their healthy limbs with robotic prostheses?
Questions such as these are difficult to anticipate. The concern for policymakers is creating a regulatory and legal environment that is broad enough to maintain legal and ethical norms but not so prescriptive as to hamper innovation.”