Distance and Death: Lethal Autonomous Weapons and Force Protection

10 April 2016, 0544 EDT

In 1941 Heinrich Himmler, one of the most notorious war criminals and mass murderers, was faced with an unexpected problem: he could not keep using SS soldiers to murder the Jewish population because the soldiers were breaking down psychologically. As August Becker, a member of the Nazi gas-van program, recalls,

“Himmler wanted to deploy people who had become available as a result of the suspension of the euthanasia programme, and who, like me, were specialists in extermination by gassing, for the large-scale gassing operations in the East which were just beginning. The reason for this was that the men in charge of the Einsatzgruppen [SS] in the East were increasingly complaining that the firing squads could not cope with the psychological and moral stress of the mass shootings indefinitely. I know that a number of members of these squads were themselves committed to mental asylums and for this reason a new and better method of killing had to be found.”

A “new and better method” would enable the Nazi regime to continue its murderous and genocidal rampage without psychologically breaking its own people.

The initial result was a plan to use carbon monoxide gas to kill Jewish prisoners of war, but this was soon abandoned in favor of Zyklon B, a delousing agent that released a lethal gas. Zyklon B was cheaper and faster than piping in carbon monoxide. It was Karl Fritzsch, a creative and sadistic SS officer at the Auschwitz concentration camp, who came up with the idea and tested it on prisoners while his boss, Rudolf Höss, was out of town. Upon returning, Höss wrote in his diary that a “calm came over him” because his SS troops would be in better morale, having been “spared [the] bloodbaths” of mass shootings. Now the prisoners could be “exterminated” without the SS having to see it, and the bodies would be quickly cremated, sparing SS officers the mental anguish of witnessing what they had done.

Over seventy years later, the lesson has not fully sunk in. Risk and distance, it turns out, do not seem to matter when one can still see, witness, and reflect on the killing one is perpetrating. For example, a 2011 Air Force mental-health survey of 600 combat drone operators found that “forty-two percent of drone crews reported moderate to high stress, and 20 percent reported emotional exhaustion or burnout.” The explanation was attributed in part to “existential conflict.” Colonel Kent McDonald, one of the study’s authors, attributed this to the drone pilots being “under a great amount of stress because of all this video feed.” A later study found that drone operators “suffered from the same levels of depression, anxiety, PTSD, alcohol abuse, and suicidal ideation as traditional combat aircrews.”

Moreover, anecdotal evidence from former drone pilots suggests that because they can “see” and “witness” not only the act of killing but what comes before and after, sometimes for hours or days on end, they are more psychologically aware of their actions than pilots of manned aircraft who never actually “see” the effects of dropping their ordnance. “To mitigate these effects,” Matthew Power explains, “researchers have proposed creating a Siri-like user interface, a virtual copilot that anthropomorphizes the drone and lets crews shunt off the blame for whatever happens. [For example,] Siri, have those people killed.”

Except that, unlike the Nazis’ plan with Zyklon B, a drone operator would still be present and would still see what he or she was complicit in doing. Even if the operator were no longer “pulling the trigger” but merely giving the order to fire, the act and its effects would still be right in front of the operator’s face, potentially exposing him or her to these psychological stresses. That is, until we are able to take the human out of the loop entirely.

One of the oft-espoused virtues of autonomous weapons systems (AWS) is that they enable a military to use lethal force against an adversary without risking a human combatant’s life in the process. As Bob Work, US deputy secretary of defense, explains, “I’m telling you right now, 10 years from now if the first person through a breach isn’t a friggin’ robot, shame on us.” To be sure, Work was speaking about human-robot “teaming,” but the reality is that he is also quite fond of the idea of permitting autonomous weapons to do the fighting in a variety of circumstances, presumably without humans anywhere near the systems. Force protection is one of the primary reasons to pursue AWS, and it is a strong and morally important one. Indeed, many would find it hard to argue with its normative force, for nothing in international law or just war theory requires militaries to put themselves at greater, or even equal, risk to fight their adversaries. Fair fights have never been a moral obligation; saving one’s soldiers obviously is.

I am not arguing that the Nazis’ decision to pursue “more humane” methods of killing (more “humane” for their own soldiers, not for the Jewish people) was a morally permissible act, nor am I suggesting that the acts of US drone operators are morally equivalent. But both are tales in which individuals tasked with killing, in situations where they themselves are not threatened by the person they are killing yet must witness and see what they are doing, are psychologically harmed. The two cases are dissimilar in many ways, but they share the same result: though the killers were shielded from bodily harm, they were still subject to the moral hazard and mental anguish of killing. (Whether this is justified anguish, desert, or harm is a different matter.)

It would seem, then, that if a military truly wanted to engage in “force protection,” where that protection requires guarding against both physical and mental harm, the result would be not merely to increase warfighters’ distance from the battlefield but to remove them from witnessing any of the horrors of war. It is not enough merely to say “Siri, kill those people”; one ought not to see the effects of one’s orders either. Autonomous weapons, then, are the ultimate force protector: they not only shield warfighters from being the first ones through the breach, they keep humans from being anywhere near, or ever seeing, a breach.

War is costly in blood and treasure, but also, I would suggest, in cognitive terms. Just because one is not shot, or shot at, does not mean one cannot be harmed by one’s own actions, even when those actions are temporally or geographically distant. Addressing this fact would seem to require still further cognitive distance, or removing the human entirely.

However, this appears an odd and seemingly immoral choice to make. While international law and just war theory do not require that soldiers fight equally or fairly, there does seem to be some assumption that war still imposes costs on both sides. Otherwise it seems all too likely that, without such costs, militaries will see an opportunity to use this capability to pursue policies and objectives they would otherwise have foregone. The cost, even when one is fighting a just war, acts as a brake against misuse in potentially unjust or future wars. Given that most wars are of questionable justice, and quite possibly unjust on both sides, there is further reason for caution about the strong but worrisome justification of force protection. Eliminating harm to one’s own forces is a laudable goal, but taken to the extreme it may result in worse outcomes and more harm.

__________________________________________________________________________________________________________

Disclosure: Both of the examples cited here are, of course, contentious cases in their own right. I am not in any way equating the two morally or politically; I use them merely as a methodological tool for comparison. Moreover, I am in no way suggesting that force protection is not a goal worth pursuing. Rather, I am cautioning about how that justification leads either to the complete removal of the warfighter or to the decision not to engage in war at all. Given the escalatory rhetoric surrounding the Third Offset strategy, autonomous weapons, and the international community’s response to all of this, it is unlikely that the latter course would be preferred. On Monday the states parties to the Convention on Certain Conventional Weapons (CCW) will meet in Geneva at the United Nations to discuss a possible additional protocol to the CCW banning lethal autonomous weapons.