We worried for a long time about someone pushing the button, dropping the big one, ending the whole thing. But what if the buttons are pushing themselves? What if there are no buttons? From Anders Sandberg at Practical Ethics:
“Can we run warfare without anybody being responsible? I do not claim to understand just war theory or the other doctrines of the ethics of war. But as a computer scientist I do understand the risks of relying on systems that (1) nobody is truly responsible for and (2) cannot be properly investigated and corrected. Since the internal software will presumably be secret (much of the military utility of autonomous systems will likely be due to their ‘smarts’), outside access and testing will be limited. The behavior of complex autonomous systems in contact with the real world can also be fundamentally unpredictable, which means that even perfectly self-documenting machines may not give us useful information to prevent future misbehavior.
“Getting redress for a ‘mistake’ appears far harder in the case of a drone killing a group of civilians than in that of a gunship crew; if the mistake was due to an autonomous system, the threshold will likely be even higher. Even from the pragmatic perspective of creating disincentives for sloppy warfare, the remote and diffuse responsibility insulates the prosecuting state. In fact, we are perhaps obsessing too much about the robot part and too little about the extrajudicial part of heavily automated modern warfare.”