A common argument in favor of using robotics to deliver (lethal) force is that the violence is mediated in such a way that it naturally de-escalates a situation. In some versions, this is because the "robot doesn't feel emotions," and so is not subject to fear or anger. In other strands, the argument is that distance in time and space allows human operators to take in more information and make better judgments, including the judgment to use less-than-lethal or nonlethal force. These debates have, until now, mostly concerned armed conflict. However, with the Dallas police chief's decision to use a bomb disposal robot to deliver lethal force against the Dallas gunman, the discussion has entered a new domain: domestic policing.
Now, I am not privy to all of the details of the Dallas police operation, nor am I going to argue that the decision to use lethal force against Micah Johnson was unjustified. On the ethics of self- and other-defense, Mr. Johnson's actions and his continued posing of a lethal, imminent threat meant that officers were justified in using lethal force to protect themselves and the wider community. Moreover, state and federal law allows officers to use "reasonable" amounts of force, not merely the minimal amount of force needed to carry out their duties. Thus I am not going to argue the ethics or the legality of using a robot to deliver a lethal blast to an imminent threat.
What is of concern, however, is how the arguments in favor of increased use of robotics in policing (or war) fail to take psychological and empirical facts into consideration. If we take these into account, we might glean that the trend actually runs in the other direction: the availability and use of robotics may escalate the level of force that officers use.
Why is this so? Two questions matter here: what kind of robots are we discussing, and when are they used? First, as to kind, we are talking about tele-operated robots, that is, robots guided by a human operator. The human is "there" the whole time, seeing through the robot's cameras and sensors. Second, as to use, robots in domestic policing have until now served in nonlethal roles. Many of the situations in which they were used were crises with potentially lethal stakes, such as hostage negotiations, bomb disposal, and SWAT operations. But the robots themselves did not deliver lethal force.
Thus when we consider the use of robots to deliver lethal force, we should place it in this particular context: the situation is likely to be quite stressful, and the robot is not operating autonomously. Consequently, we should be wary of claims that using robots to deliver lethal force is better because the robot doesn't feel fear. The robot doesn't feel fear; the human does, and the human operator is still on the other end, embedded in a stressful situation. The robot's presence in no way means that the situation will de-escalate. The robot is simply one more means for a human to use in that situation.
If humans are still operating the robot, and humans are still subject to human emotions, how do psychological factors bear on the argument for increased use of robotics? Do the distance and the robotic platform mediate the human's experience in such a way as to enable better judgment and more limited use of force?
That very much depends on what outcome one is looking to achieve. As Josh Greene's experimental work on the Trolley Problem shows, human deliberation is governed by two different processes. The first is a kind of "automatic" mode: our visceral, intuitive reaction to pushing someone off a footbridge to save five workmen from a runaway trolley tells us that we would not kill the one to save the five. The second is a more deliberative mode, and it tells us that if we could instead hit a switch to kill one person and save five, we would be much more likely to do so. Greene's findings suggest that the interpersonal nature of violent action inhibits individuals both from doing violence and from using violence to save others. Conversely, the more removed one is in time or space from "doing the deed" oneself, the more likely one is to use some form of violence in Trolley Problems. For example, when asked whether they would physically push someone off the footbridge, 31% of subjects said yes, whereas 59% said yes when they only had to flip a switch. Compare this to the 63% who agreed to kill the one to save the five when the footbridge was remote and the mechanism was a switch. The propensity to kill the one to save the five roughly doubles when the killing is done remotely.
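To make that "doubles" claim concrete, a quick bit of arithmetic on the figures just cited (31% willing to push in person versus 63% willing when the footbridge is remote and operated by a switch):

$$\frac{63\%}{31\%} \approx 2.03$$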
Now, much depends on one's moral judgment about killing the one to save the five. If one is a consequentialist, then saving the five is the morally right thing to do. If one is a deontologist, then there are other concerns at play: the rights of the one weighed against the lives of the five, and the permissibility, for example, of using the one as a mere means. In short, whether distance enables "better judgment" about the use of force depends on one's prior commitments to a moral framework balancing individual rights against the common good.
To put this back into the context of domestic policing and robotics, the use of a tele-operated robot is somewhat akin to a remote trolley problem. The distance, whether cognitive or physical, changes the way the human brain judges the moral permissibility of using force; Greene's experiments bear this out. The question now is how this fact about human cognition interacts with the stress of crisis situations. Is there an interaction effect, such that humans under stress are more likely to use distancing technologies like robots in lethal ways because of how they perceive harm done at a distance? If so, then the logic that robots de-escalate a situation is faulty. Rather, the availability of distancing technology would enable an escalatory response.
Domestic law enforcement officers must of course protect themselves and the populations they serve, and this does not change with or without the presence of robots. However, we must remember that the use of force in domestic policing is not supposed to be as permissive and expansive as it is in war. Lethality is the ultimate last resort, because its overuse can undercut the rule of law. For these reasons, we ought to tread carefully when considering the use of robotics to deliver lethal force in domestic policing, for there may be a cognitive bias toward escalating the situation to a lethal response rather than de-escalating to less-than-lethal or nonlethal responses that enable capture.
At the same time, removing the robot's operator from the scene removes the pressure to shoot before being shot, which ought to have a de-escalatory effect; the individual in the Trolley Problem, after all, faces no personal threat.
If a robot can be used to deliver a lethal attack, then in most normal situations it can also deliver nonlethal attacks (e.g., tear gas, knockout gas, or flashbangs). The troubling issue, then, is that robots make it possible to attempt nonlethal force without risking innocent casualties, and yet we may continue to use lethal force instead.