When it comes to biotech or AI or military robotization, many think we can make sober decisions that will keep these amazingly powerful tools from becoming the most destructive weapons imaginable. But progress does not often proceed in an orderly fashion. Priorities can differ wildly from state to state and corporation to corporation, and decisions made by one actor can impact what others do. You can say you never want to tinker with genetics to radically improve human intelligence, but you may feel differently if another nation decides to. And this technology is likely moving too fast for its destructive potential to be meaningfully constrained by legislation.
In a Washington Post column, Vivek Wadhwa and Aaron Johnson write wisely on the topic of military automation. They encourage a complete ban on such systems, but even smaller states will eventually be able to undermine such an accord. An excerpt:
The technology is still imperfect, but it is becoming increasingly accurate — and lethal. Deep learning has revolutionized image classification and recognition and will soon allow these systems to exceed the capabilities of an average human soldier.
But are we ready for this? Do we want Robocops policing our cities? The consequences, after all, could be very much like we’ve seen in dystopian science fiction. The answer surely is no.
For now, the U.S. military says that it wants to keep a human in the loop on all life-or-death decisions. All of the drones currently deployed overseas fall into this category: They are remotely piloted by a human (or usually multiple humans). But what happens when China, Russia and rogue nations develop their autonomous robots and acquire with them an advantage over our troops? There will surely be a strong incentive for the military to adopt autonomous killing technologies.
The rationale then will be that if we can send a robot instead of a human into war, we are morally obliged to do so, because it will save lives — at least, our soldiers’ lives, and in the short term.•