Last week I hosted and facilitated a multi-stakeholder meeting of governments, industry and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment and use of autonomous weapons systems (AWS). These two concepts presently dominate the discussion over whether to regulate or ban AWS, but neither is fully endorsed internationally, despite work by governments, academia and NGOs. One side prefers the notion of “control,” the other “judgment.”

Yet what has become apparent from many of these discussions, my workshop included, is the need for an apt analogy to help policy makers understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy for AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us understand the nuances of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.

A marine mammal system is the team of marine mammal, human trainer, and any sensors or equipment that each uses. In the US Navy’s case these systems are not “weaponized”; they are used predominantly for mine countermeasures, very shallow water mine hunting and neutralization, object recovery, and, in some limited circumstances, diver interdiction. The animals, usually bottlenose dolphins or sea lions, interact very personally with their trainer/handler (the same person in this case), and together they form one unit. Once deployed by their trainer, they work autonomously to complete their task. Because these animals are deployed all over the world, they work in open water, out of sight of their trainers. Their trainers must trust and rely on the animals to do exactly as they are ordered to do.

Unlike canines and their handlers, this team works apart. The animals operate beyond the trainer/handler’s line of sight, adapting on their own to changing circumstances or environments without direction. In essence, these animals work exactly as proponents of AWS want such systems to work: within the bounds of a commander’s intent; obeying orders while remaining flexible enough to adapt to changing circumstances; behaving reliably and predictably enough to preclude unlawful or immoral conduct; operating without communications and beyond the line of sight; and leaving all the “important” decisions to the human.

I say “important” here because, in the case of an autonomous weapon system, there is a gray area between two decisions. The first is a human commander’s decision to deploy the system, made after she has performed the requisite proportionality calculation and taken all feasible precautions to avoid or minimize incidental loss of civilian life, injury or damage during an attack. The second occurs when the system reaches the area to which it was directed and makes “factual determinations, such as whether to fire the weapon or to select and engage a target.” Where the human’s decision to deploy and the system’s decision to engage diverge is the largest point of contention, for the latter can easily slide into place to override or moot the former.

I’ve testified to various states that the US marine mammal system exemplifies the concept of meaningful human control. This is because, I think, the animal and the human do not make up an object, like a weapon or a tool, but a team. They are independent, heterogeneous, autonomous units engaged in the pursuit of a common objective. Each has its own roles and tasks, but the human’s task is to ensure proper training, certifications, and deployments or assignments.

Moreover, in the case of the marine mammal system, the human trainer/handler also requires training, certification, discipline and supervision from various authorities. Thus at any one point in time, the failure of the system (both human and animal) can be due to a variety of factors or combinations of factors. Indeed, a failure is rarely the animal’s fault alone. Take, for instance, Florida v. Harris, in which Justice Kagan found that a trained or certified dog’s “sniff” was sufficient to establish probable cause for a search because the dog’s certification and continued training are an adequate indication of its reliability. Those certifications, training logs, and contact hours are what constitute “continued training.” A dog cannot be continually trained by itself; it requires constant interaction with its handler. An autonomous system, one that learns and adapts, will be no different. Unlike the munition that is created and stockpiled in some warehouse, an autonomous system needs continual training in a real-world environment to meet the standards of reliability and predictability.

If this is so, then we need to fundamentally rethink the autonomous weapons debate. Autonomous weapons are not conventional munitions subject only to the considerations of discrimination, proportionality, superfluous injury and unnecessary suffering. Rather, they lie somewhere between means and methods of war and agents of war. They are object, agent, means and method. They cannot be reviewed under traditional weapons review processes because they are not static munitions; they are deciding munitions. Indeed, they are closer to the dolphin than to the landmine, and so we should rethink what it means to be an “autonomous” weapon in war, and consider the systems we have already devised for training and controlling such autonomous agents. If I am correct, there are profound implications for how one could regulate such weapons, as well as for the bounds in time, distance and space that we would be obligated to place on these systems.

Nota bene: for all sources regarding the United States Naval Marine Mammal Program, contact the author.