“Isn’t it pretty to think so?” is the question that completes The Sun Also Rises, and it’s the one that comes to my mind when someone suggests that America or any other country or entity will be able to control machines that kill autonomously. It’s possible to largely keep a hood over nukes because of the rarity of the materials and expertise needed to create them, but that won’t be the way of drones, robots and other automatons of destruction. They’ll be easy, scarily easy, to make. And so inexpensive they’ll practically cost nothing.
In her very well-written piece “The Case Against Killer Robots,” Denise Garcia, writing in Foreign Affairs, argues that it’s possible to halt “progress.” The opening:
“Wars fought by killer robots are no longer hypothetical. The technology is nearly here for all kinds of machines, from unmanned aerial vehicles to nanobots to humanoid Terminator-style robots. According to the U.S. Government Accountability Office, in 2012, 76 countries had some form of drones, and 16 countries possessed armed ones. In other words, existing drone technology is already proliferating, driven mostly by the commercial interests of defense contractors and governments, rather than by strategic calculations of potential risks. And innovation is picking up. Indeed, China, Israel, Russia, the United Kingdom, the United States, and 50 other states have plans to further develop their robotic arsenals, including killer robots. In the race to build such fully autonomous unmanned systems, China is moving faster than anyone; it exhibited 27 different armed drone models in 2012. One of these was an autonomous air-to-air supersonic combat aircraft.
Several countries have already deployed forerunners of killer robots. The Samsung Techwin security surveillance guard robots, which South Korea uses in the demilitarized zone it shares with North Korea, can detect targets through infrared sensors. Although they are currently operated by humans, the robots have an automatic feature that can detect body heat in the demilitarized zone and fire with an onboard machine gun without the need for human operators. The U.S. firm Northrop Grumman has developed an autonomous drone, the X-47B, which can travel on a preprogrammed flight path while being monitored by a pilot on a ship. It is expected to enter active naval service by 2019. Israel, meanwhile, is developing an armed drone known as the Harop that could select targets on its own with a special sensor, after loitering in the skies for hours.
Militaries insist that such hardware protects human life by taking soldiers and pilots out of harm’s way. But the risk of malfunctions from failed software or cyber attacks could result in new dangers altogether. Countries will have dissimilar computer programs that, when interacting with each other, may be erratic. Further, signal jamming and hacking become all the more attractive — and more dangerous — as armies increasingly rely on drones and other robotic weaponry. According to killer robot advocates, removing the human operator could actually solve some of those problems, since killer robots could ideally operate without touching communication networks and cyberspace. But that wouldn’t help if a killer robot were successfully hacked and turned against its home country.
The use of robots also raises an important moral question. As Noel Sharkey, a British robotics expert, has asked: ‘Are we losing our humanity by automating death?’ Killer robots would make war easier to pursue and declare, given the distance between combatants and, in some cases, their removal from the battlefield altogether. Automated warfare would reduce long-established thresholds for resorting to violence and the use of force, which the UN has carefully built over decades. Those norms have been paramount in ensuring global security, but they would be easier to break with killer robots, which would allow countries to declare war without having to worry about causing casualties on their own side.”