Artificial intelligence and autonomous weapons are two potentially very different technologies, and their intersection raises many complexities. AI is the simulation of human intelligence processes by machines. These processes include “learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction.”[1] AI can perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
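To make that three-part definition concrete, here is a minimal, purely pedagogical sketch of learning, reasoning, and self-correction in a toy classifier. Nothing here reflects any real system; the data and the task are invented for illustration.

```python
# Toy illustration of the three AI processes named in the definition above:
# learning, reasoning, and self-correction. Data and task are hypothetical.

# Hypothetical training data: (feature vector, label) pairs.
examples = [((1.0, 0.0), 1), ((0.0, 1.0), 0), ((1.0, 1.0), 1), ((0.0, 0.0), 0)]

weights = [0.0, 0.0]
bias = 0.0

def reason(features):
    """Reasoning: apply the learned rule to reach a definite conclusion."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Learning: acquire the rule from the information (the examples),
# with self-correction: adjust the rule after each mistake.
for _ in range(10):
    for features, label in examples:
        error = label - reason(features)
        if error != 0:  # the self-correction step
            weights = [w + error * x for w, x in zip(weights, features)]
            bias += error

print(reason((1.0, 0.0)))  # -> 1: a conclusion drawn from the learned rule
```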

Autonomous Weapons Systems (AWS) are defined by the U.S. Department of Defense (D.O.D.) as “a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator.”[2] AWS are distinctive because, unlike the materials and technologies necessary for guided missiles or nuclear weapons, AI is not confined to what the Pentagon seeks to harness: it is already deeply woven into everyday life. The D.O.D.’s science research division reinforces the idea that AI within autonomous robotic systems will be a critical part of the United States’ ongoing defense strategy. The D.O.D.’s report envisions tactical advantages from fully self-directed machines and humans working together in the field. For example, in one scenario (figure 2), a group of AI-equipped drones would congregate above a combat zone to autonomously fire on the enemy, provide real-time surveillance of the area, and jam enemy communications.
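To see what the D.O.D. definition means in software terms, here is a deliberately simplified, hypothetical sketch of a “select and engage without further intervention” control loop. Every function and data structure below is an assumption made for illustration, not any real system or API.

```python
# Hypothetical sketch of the control loop implied by the D.O.D. definition:
# once activated, the system selects and engages targets with no human step.
# All functions below are illustrative stand-ins.

import time

def sense():
    """Stand-in sensor fusion: return a list of detected objects."""
    return []  # a real system would return radar/optical detections

def select_target(detections):
    """Stand-in targeting logic: pick a target, or None."""
    return detections[0] if detections else None

def engage(target):
    """Stand-in effector command."""
    print(f"engaging {target}")

def autonomous_loop(activated):
    # Note what is absent: no operator confirmation appears anywhere
    # between selection and engagement -- that absence is the definition.
    while activated():
        target = select_target(sense())
        if target is not None:
            engage(target)
        time.sleep(0.1)  # hypothetical control-loop cadence

# autonomous_loop(lambda: True)  # would run indefinitely once "activated"
```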

Fully autonomous weapons can be empowered to assess the situational context on a battlefield and to decide on the necessary attack based on the information they process; this is where AI is introduced. Because the distinctive hallmark of human reasoning is the capacity to set ends and goals, AWS raise for the first time the prospect of removing the human operator from the battlefield. A Pentagon official says that weapons with AI must employ “appropriate levels of human judgment.” Yet scientists and human rights experts argue that this standard is far too broad and have urged that such weapons instead be subject to “meaningful human control.”[3] The development of AWS technology signals a potential transformation in the structure of war that differs qualitatively from previous military technological innovations.
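One way to see why critics find “appropriate levels of human judgment” too broad is to ask where it would appear in code. Under “meaningful human control,” a human decision sits inside the loop itself; under the broader standard, it need not appear anywhere in particular. The sketch below is a hypothetical illustration of that gap; all names are assumptions, not any fielded design.

```python
# Hypothetical illustration of the policy gap described above.
# All names are illustrative assumptions.

def human_confirms(target) -> bool:
    """Stand-in for an operator's explicit, per-engagement decision."""
    answer = input(f"Engage {target}? [y/N] ")
    return answer.strip().lower() == "y"

def engage_with_meaningful_human_control(target, engage):
    # "Meaningful human control": the human gate sits between target
    # selection and engagement and cannot be skipped.
    if human_confirms(target):
        engage(target)

def engage_with_unspecified_judgment(target, engage):
    # "Appropriate judgment" may have been exercised at design time,
    # during legal review, or in rules of engagement -- nothing in the
    # engagement path itself requires a human decision.
    engage(target)
```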

Today, drones with the ability to operate on their own can be effortlessly armed, but their behavior is not yet predictable enough for them to be deployed safely in highly fluid situations. Once AI advances to that point, telling a drone whom or what to shoot will be simple, since weapons programmed to strike only specific kinds of targets already exist. Yet the underlying cognitive technology, once successfully developed, is unlikely to remain solely in American hands: such technologies do not normally stay secret, and many are now ubiquitous, powering everything from the internet to self-driving cars. Right now, our concern should be focused on ensuring the public’s safety rather than on fearmongering and opposing such new technologies.
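In software terms, restricting a weapon to “specific kinds of targets” amounts to an allow-list over a classifier’s output. Here is a hypothetical sketch of that idea; the class labels, confidence threshold, and data structure are all assumptions for illustration, not any fielded system.

```python
# Hypothetical sketch: restricting engagement to specific target classes.
# The classes, threshold, and detection format are illustrative assumptions.

ALLOWED_CLASSES = {"radar_emitter", "armored_vehicle"}  # hypothetical allow-list
MIN_CONFIDENCE = 0.95  # assumed threshold; real values are a policy choice

def may_engage(detection) -> bool:
    """Permit engagement only for high-confidence, allow-listed classes."""
    return (detection["class"] in ALLOWED_CLASSES
            and detection["confidence"] >= MIN_CONFIDENCE)

# Example: the second detection is filtered out by class, the third by confidence.
detections = [
    {"class": "radar_emitter", "confidence": 0.98},
    {"class": "civilian_vehicle", "confidence": 0.99},
    {"class": "armored_vehicle", "confidence": 0.70},
]
print([d for d in detections if may_engage(d)])
```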


Until next time,

ASW
Join me on LinkedIn
Follow me on Twitter


[1] Rouse, Margaret. “What Is AI (Artificial Intelligence)? – Definition from WhatIs.com.” SearchCIO. TechTarget, n.d.

[2] “The Ethics of Autonomous Weapons Systems.” Penn Law. University of Pennsylvania Law School, Nov. 2014.

[3] Roff, Heather M. “Meaningful Human Control or Appropriate Human Judgment? The Necessary Limits on Autonomous Weapons.” Global Security, Arizona State University, 2015, p. 2.