Ethical Robots on the Battlefield?

2 March 2017, 1614 EST

Every day, it seems, we hear more about the advancements of artificial intelligence (AI), the amazing progress in robotics, and the need for greater technological improvements in defense to “offset” potential adversaries. When all three of these arguments get put together, there appears to be some sort of magic alchemy that results in wildly fallacious, and I would say pernicious, claims about the future of war. Much of this has to do, ultimately, with a misunderstanding of the limitations of technology as well as an underestimation of human capacities. The prompt for this round of techno-optimism debunking is yet another specious claim about how robotic soldiers will be “more ethical” and thus “not commit rape […] on the battlefield.”

There are actually three lines of thought here that need unpacking. The first involves the capabilities of AI with relation to “judgment.” As our above philosopher contends, “I don’t think it would take that much for robot soldiers to be more ethical. They can make judgements more quickly, they’re not fearful like human beings and fear often leads people making less than optional decisions, morally speaking [sic].” This sentiment about speed and human emotion (or the lack thereof) has underpinned much of the debate about autonomous weapons for the last decade, if not longer. Dr. Hemmingsen’s views are not original; nor are they grounded in reality.

First, even the most advanced artificial agents (that is, AIs that can learn and “reason” about plans, and “make decisions” about potential courses of action in pursuit of a plan or a goal) are about as intelligent as an ant. (And even ants may be more intelligent.) That is, they can navigate based on cues from the environment or their reward function, they can figure out patterns, and they can “reason” about how best to achieve a particular goal given limited information, some ability to sense their environment, and a set of “directions,” like an algorithm or their architecture. But even getting an agent to learn a skill—like navigation—and transfer it to another task without catastrophically forgetting everything it has learned is the cutting edge of AI. So how is a robot soldier, that is, an entity delegated a series of tasks in a complex environment (i.e., a war zone), supposed to make “better” judgments than a human about a set of parameters (we can call them “ethics”) that are not predefined but expressed only in abstract terms? The simple answer is that it cannot. For as Richard Feynman noted, “What I cannot create, I do not understand.” If we cannot identify the “right” action, then an AI cannot either, and I dare say that we have been debating what “right” looks like for over two millennia. So let us eschew the question of compliance with the laws of war and focus on the ethics here.
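
To see how thin that kind of “reasoning” actually is, here is a minimal, purely illustrative sketch of what such an agent does under the hood. Everything in it is invented for illustration (the toy 4x4 gridworld, the reward values, the hyperparameters); the point is simply that the agent’s “judgment” is nothing more than picking whichever action has the largest learned number attached to it. No abstract concept, ethical or otherwise, appears anywhere in what it optimizes.

```python
# Purely illustrative: a toy tabular Q-learning agent on a made-up 4x4 grid.
# Its whole "mind" is a table of numbers, and its "decisions" are argmaxes.
import random

N_STATES = 16           # a 4x4 gridworld, states 0..15
ACTIONS = [0, 1, 2, 3]  # up, down, left, right
GOAL = 15
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Move on the grid; the reward is +1 at the goal cell and 0 everywhere else."""
    row, col = divmod(state, 4)
    if action == 0:
        row = max(row - 1, 0)
    elif action == 1:
        row = min(row + 1, 3)
    elif action == 2:
        col = max(col - 1, 0)
    else:
        col = min(col + 1, 3)
    next_state = row * 4 + col
    return next_state, (1.0 if next_state == GOAL else 0.0), next_state == GOAL

for episode in range(2000):
    state, done = 0, False
    while not done:
        # The "decision": usually just take the action with the biggest number so far.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # The entire basis of every future "judgment": a numerical update.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# The learned "policy" is a lookup table: whichever number is biggest wins.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

If “do the ethically right thing” cannot be squeezed into that scalar reward, it simply does not exist for the agent.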

More to the point, if we are to build the brains of a robot soldier, it is going to have to know a lot of things about a lot of very abstract values before it can even “judge” what the ethically right action is. At its most basic level, this requires: an enormous amount of training data, on the order of hundreds of millions of observations; extraordinary computing power to run the training and get the system to learn; the right architecture and the right algorithm to get it to learn; and THEN! once deployed, the ability of humans to understand the system well enough to see that what it did was not a war crime, but REALLY the right action. (Now how tractable do driverless cars look as a moral problem?)
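
As a purely hypothetical illustration of what that shopping list implies, suppose we tried to frame “was this action ethically permissible?” as an ordinary supervised-learning problem. The features and labels below are invented placeholders, not anyone’s actual dataset; the point is that a human has to fill in the label column, for every kind of situation the system might face and at an enormous scale, which presupposes that we already agree on what “right” looks like.

```python
# Hypothetical sketch only: "ethical judgment" recast as supervised learning.
from sklearn.linear_model import LogisticRegression

# Each row is a crude, made-up encoding of a battlefield situation:
# [target_is_armed, civilians_nearby, under_fire, order_received]
situations = [
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]

# The hard part is this column: who writes it, on what theory of "right,"
# and for hundreds of millions of cases rather than four? The model can only
# reproduce whatever answers we encode here.
ethically_permissible = [1, 0, 0, 1]

model = LogisticRegression().fit(situations, ethically_permissible)

# Once deployed, the "judgment" is a bare probability with no account of its
# reasoning attached; deciding whether to trust it remains a human problem.
print(model.predict_proba([[1, 1, 0, 1]]))
```

And that is before we ask how such a classifier would be audited after the fact to show that what it did was not a war crime.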

Second, let us set aside the fact that the technology is nowhere close to being able to handle such complex judgments and understand abstract concepts (let alone the problem of pairing all of this with an embodied robot and the control architecture that would require), and look at how we have fundamentally underestimated humans’ conduct during war. In one of the most oft-used moves, Ron Arkin and others cite a mid-2000s survey given to US warfighters deployed in Operation Iraqi Freedom.

Here were the findings:

  1. Approximately 10 percent of Soldiers and Marines report mistreating noncombatants (damaged/destroyed Iraqi property when not necessary or hit/kicked a noncombatant when not necessary). Soldiers who had high levels of anger, experienced high levels of combat, or screened positive for a mental health problem were nearly twice as likely to mistreat noncombatants as those who had low levels of anger or combat or who screened negative for a mental health problem.

  2. Only 47 percent of Soldiers and 38 percent of Marines agreed that noncombatants should be treated with dignity and respect.

  3. Well over a third of Soldiers and Marines reported torture should be allowed, whether to save the life of a fellow Soldier or Marine or to obtain important information about insurgents.

  4. Some 17 percent of Soldiers and Marines agreed or strongly agreed that all noncombatants should be treated as insurgents.

  5. Just under 10 percent of Soldiers and Marines reported that their unit modifies the ROE (rules of engagement) to accomplish the mission.

  6. Some 45 percent of Soldiers and 60 percent of Marines did not agree that they would report a fellow soldier/marine if he had injured or killed an innocent noncombatant.

  7. Only 43 percent of Soldiers and 30 percent of Marines agreed they would report a unit member for unnecessarily damaging or destroying private property.

  8. Less than half of Soldiers and Marines would report a team member for engaging in unethical behavior.

  9. A third of Marines and over a quarter of Soldiers did not agree that their NCOs and Officers made it clear not to mistreat noncombatants.

  10. Although they reported receiving ethical training, 28 percent of Soldiers and 31 percent of Marines reported facing ethical situations in which they did not know how to respond.

  11. Soldiers and Marines are more likely to report engaging in the mistreatment of Iraqi noncombatants when they are angry, and are twice as likely to engage in unethical behavior on the battlefield as when they have low levels of anger.

  12. Combat experience, particularly losing a team member, was related to an increase in ethical violations.

Now, let us switch this around, with just a few examples given space constraints (the simple arithmetic behind this move is sketched just after the list below).

  • Approximately 90 percent of Soldiers and Marines report respecting noncombatants: they did not damage or destroy Iraqi property when not necessary, and did not hit or kick a noncombatant when not necessary. Soldiers who did not have high levels of anger, did not experience high levels of combat [probably repeatedly], or did NOT screen positive for a mental health problem were not likely to mistreat noncombatants. [note: Recall that necessity in war is an excusing condition, especially when the force is proportionate to military objectives.]
  • Nearly half of Soldiers (47 percent) and over a third of Marines (38 percent) affirmatively agreed that noncombatants ought to be treated with dignity and respect.
  • Roughly 60 percent of Soldiers and Marines did not report that torture should be allowed, even to save the life of a fellow Soldier or Marine or to obtain important information about insurgents. [note: This was before the CIA torture memo and the high public backlash against torture.]
  • 83 percent of Soldiers and Marines did not agree or strongly agree that all noncombatants should be treated as insurgents.
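
To be transparent about the “switch it around” move, it is nothing more exotic than taking complements of the figures quoted above. A minimal sketch (using only those quoted figures, with “just under 10 percent” rounded to 10, and setting aside neutral responses and everything the survey did not ask):

```python
# Complement arithmetic for the "switch it around" reading of the survey.
# Percentages are those quoted in the findings above; neutral or unasked
# responses are not accounted for.
reported_pct = {
    "reported mistreating noncombatants": 10,
    "agreed or strongly agreed that all noncombatants should be treated as insurgents": 17,
    "reported that their unit modifies the ROE to accomplish the mission": 10,  # "just under 10 percent"
}

for finding, pct in reported_pct.items():
    print(f"~{100 - pct}% of Soldiers and Marines were NOT among those who {finding}")
```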

The results of this survey ought to give us pause and make us think about how to improve the operating and deployment conditions for Soldiers and Marines, to be sure. However, the way these findings are usually cited does not give credit where credit is due. We are not talking about an overwhelming, or even a simple, majority of warfighters reporting unethical behavior – or even thinking about such behavior – during one of the highest-fatality operations for ground troops in decades. Faced with insurgents operating among the civilian population, the vast majority of troops did not in fact act unethically, and when they were unsure, they may have been unsure for the simple reason that they faced a moral dilemma that even a human could not resolve. What is more, as all good political scientists know, unobservables are tricky to measure. What of questions not asked, or of actions not taken? How are we to factor those into this population?

Finally, there is the mind-boggling jump from the argument that robot soldiers would comply more fully with ethical principles to the claim that they will not rape. My friend and colleague Charli Carpenter addressed this years ago, so I will not belabor the point. All I wish to ask is why ethics in war is so immediately tied to the sexual violation of a person’s body. Either it is because, as Carpenter notes, “rape victims’ bodies were simply being referenced as a tool of political argument to justify weapons development,” or because war is a practice that uses gender and sex to absolve itself of the violence that it imposes on others. If one can “protect” one’s women from rape by brutish soldiers (never mind that rape is a political and military tactic, or that women can rape men too), then making war “cleaner” and more “ethical” seems like a win-win. Who wants to make war dirtier and more unethical? No one, and thus the logic of the argument is as vacuous as it is specious.

To bring us back to reality, then: the notion of robot soldiers being more ethical in war belongs in the realm of science fiction. Until and unless humans can behave more ethically in war, the technology will not save us. AI and machine learning merely pick out patterns in vast amounts of data to learn how to do things, and if we are the ones providing those data points, they will merely learn to reflect our own shortcomings. Thus if a system sees that rape as a political-military tool is effective as a strategy for punishment or unit cohesion, it too may engage in the same behavior. The only difference is that it will not feel the least bit of remorse about doing so, for, as we were promised, it lacks the ability to feel such emotions, and it is a computer that will always act as directed.