Peter Asaro

It would be great to ban autonomous-weapons systems, but the present doesn’t get to govern very far into the future. Our realities won’t be tomorrow’s, and I fear that sooner or later the possible becomes the plausible. Hopefully, we can at least kick the can far enough down the road that everyone awakens to the significant risks before they’re realized. As Peter Asaro makes clear in a Scientific American essay, there will be grave consequences should warfare be robotized. An excerpt:

Autonomous weapons pose serious threats that, taken together, make a ban necessary. There are concerns about whether AI algorithms could effectively distinguish civilians from combatants, especially in complex conflict environments. Even advanced AI algorithms would lack the situational understanding to determine whether the use of violent force was appropriate in a given circumstance, or whether the force used was proportionate. Discrimination and proportionality are requirements of international law for humans who target and fire weapons, but autonomous weapons would open up an accountability gap. Because humans would no longer know what targets an autonomous weapon might select, and because the effects of a weapon may be unpredictable, there would be no one to hold responsible for the killing and destruction that result from activating such a weapon.

Then, as the Future of Life Institute letter points out, there are threats to regional and global stability, as well as to humanity. The development of autonomous weapons could very quickly and easily lead to arms races between rivals. Autonomous weapons would reduce the risks to combatants, and could thus reduce the political risks of going to war, resulting in more armed conflicts. Autonomous weapons could be hacked, spoofed and hijacked, and directed against their owners, civilians or a third party. Autonomous weapons could also initiate or escalate armed conflicts automatically, without human decision-making. In a future where autonomous weapons fight autonomous weapons, the results would be intrinsically unpredictable, and much more likely to lead to the mass destruction of civilians and the environment than to the bloodless wars that some envision. Creating highly efficient automated violence is likely to lead to more violence, not less.

There is also a profound moral question at stake.
