One of the more depressing elements of the narrative at the CCW Experts’ Meeting this week has been the argument, repeated by a number of autonomous weapons proponents both in plenary and in side discussions, that an advantage of such weapons is the following: unlike human soldiers, they would never commit rape.
This is but a new twist on a broader argument most prominently made by Georgia Tech Professor Ronald Arkin in his book Governing Lethal Behavior in Autonomous Robots, but shared by many AWS proponents: that autonomous weapons might be good not just for national security but for human security, because they would do better than human soldiers at complying with the laws of war. This would allegedly occur through a combination of situational judgment, the ability to distinguish civilians from combatants, and good old-fashioned restraint. So the argument goes: a machine would never get tired, go berserk when its buddy died, or make lethal mistakes out of anger, jealousy, or malice.
Certainly a robot warrior would never rape.
Below I will first address a number of problems with the rape argument, and then show how these problems cast doubt on the broader (and I would say over-optimistic) prediction of machine just-warrior-hood.
Let’s start with rape. First of all, what definition are these delegates using that leads them to think that rape could not be committed by an autonomous machine? Most likely, I figure, one that locates the act of and propensity for rape in biological male sexual impulses. But here is the actual international definition of rape, codified in the Elements of Crimes accompanying the 1998 Rome Statute of the International Criminal Court:
“The perpetrator invaded the body of a person by conduct resulting in penetration, however slight, of any part of the body of the victim or of the perpetrator with a sexual organ, or of the anal or genital opening of the victim with any object or any other part of the body. The invasion was committed by force, or by threat of force or coercion, such as that caused by fear of violence, duress, detention, psychological oppression or abuse of power, against such person or another person, or by taking advantage of a coercive environment, or the invasion was committed against a person incapable of giving genuine consent.”
If I can imagine a machine inflicting lethal violence on a human being with a projectile, I have no problem whatsoever imagining a machine forcibly penetrating the body of a male or female human being in ways consistent with this definition. Of course, a machine programmed to kill according to the algorithm Professor Ron Arkin has in mind probably wouldn’t do that as a matter of course. But who is to say that a machine couldn’t be programmed to rape instead of kill, if some nefarious dictator got his hands on the most advanced model as a result of a CCW-legitimated robotic arms race and decided to use it as a tool of terror rather than of human security?
Underlying these techno-optimists’ thinking is an important fallacy: they assume that war rape is a crime committed opportunistically by individual soldiers, often in untrained and lawless rebel groups, rather than ordered by states. Yet this is one of many “myths” of wartime sexual violence identified in a recent United States Institute of Peace report:
Several recent studies have found that state armed groups are far more likely than rebel groups to be reported as perpetrators of rape and other sexual violence. The recent PRIO study of African conflicts found that, between 2000 and 2009, armed state actors were more likely to be reported as perpetrators than either rebel groups or pro-government militias. Between 1989 and 1999, state forces and other armed actors were equally likely to be reported as perpetrators.
Even granting that conflict-related rape is commonly perpetrated by state militaries, the assumption may be that these crimes are the opportunistic behavior of bad apples rather than strategic policy: that the key problem for states is preventing soldiers from raping out of emotion or lust, and that creating emotionless, lustless soldiers would therefore solve the problem. That too is an error: history shows states often order rape. During World War II, Korean women were forcibly enslaved and raped as part of Japan’s “comfort women” policy. During the genocidal violence that accompanied Bangladesh’s secession from Pakistan, tens of thousands of women were raped on the orders of the West Pakistan military. In Bosnia-Herzegovina, male soldiers were ordered to rape on pain of castration and death. And in Peru and Guatemala, a meta-analysis by Michele Leiby shows that a majority of rapes were carried out by the state, many systematically.
These are not isolated anecdotes. Harvard University Professor Dara Cohen has collected cross-national quantitative data on rape in civil wars between 1980 and 2009. She found that of the 86 major civil wars during this period, 67 included reports of state perpetrators of rape. According to Cohen, her dataset also shows that 85% of those conflicts include reports of rape in detention, an act that implicates states: the very states that would have autonomous weapons in their arsenals if proponents have their way. In an email exchange this evening with Professor Cohen on the idea that autonomous weapons could “never rape,” she replied:
“What I can say is that the vast majority of reported wartime rape is perpetrated by states, and a lot of that seems to be in the context of detention and torture. If sexualized torture is a common tool of states for interrogation and punishment during periods of war, then the “human-ness” of the torturer is moot. A world with robot sexual torturers is actually a pretty scary one: robots also lack morality, so one could imagine an extremely brutal machine that might have the ability to perform horrible tortures that most humans would find repugnant or unbearable.”
The reason none of this occurs to delegates could be good old-fashioned gender non-awareness: how many male experts in international disarmament law are also experts in conflict-related sexual violence?* But it could also be a function of the undue techno-optimism that motivates not just the rape argument but all arguments about AI humanitarianism. Simply put, when AWS proponents argue that AWS would commit fewer war crimes than human soldiers, they seem to mean fewer war crimes than the average human soldier whose state wants her not to commit war crimes.** They overlook the number of war crimes that occur because human soldiers fail to disobey unlawful orders from the state.
It stands to reason that robotic soldiers would be even less likely than humans to disobey orders to kill noncombatants, torture, or rape. In fact, if they are a product of their programming, it is plausible they would be almost certain to carry out unlawful orders to commit war crimes, genocide, or crimes against humanity, with no conscience, empathy, or independent moral judgment to get in the way. Indeed, this is a key counter-argument made by Human Rights Watch in its new report, out today, rebutting proponents’ claims:
“Due to their lack of emotion, fully autonomous weapons could be the perfect tools for leaders who seek to oppress their own people or to attack civilians in enemy countries. Even the most hardened troops can eventually turn on their leader if ordered to fire on their own people or to commit war crimes. An abusive leader who can resort to fully autonomous weapons would be free of the fear that armed forces would resist being deployed against certain targets.”
Of course, both proponents and opponents of AWS are trading in hypotheticals. But in a counterfactual argument, the best predictor of future human practice is past human practice.
In short, the techno-humanitarian argument rests on a logical paradox: humans are less likely than robots to comply with the law, but it is these untrustworthy humans who would be the principals in any scenario where robotic agents were unleashed to carry out the law. So even if Ron Arkin is right that a humanitarian government could create the perfect humanitarian robotic soldier,*** it is equally true that a dictator could create the perfect tool of repression. Robots could be programmed as just warriors, and they could just as easily be programmed to kill, to restrain, to torture, and yes, to rape. Blithely saying otherwise not only miscommunicates what rape is and how it occurs; it misstates what we do know about the determinants of war crimes. And perhaps most of all, it encourages us to make policy based on a naively utopian expectation, one unsupported by any empirical evidence and dramatically at odds with the history of organized warfare.
________________________________
*Indeed, this is another point made to me in hallways by female participants who were offended by the rape references: it’s not as if these men are champions of sexual violence reduction across the board, they said, or know anything about, or are involved in any way in, gender violence initiatives. Rather, it seemed to many women present that rape victims’ bodies were simply being referenced as a tool of political argument to justify weapons development. This is particularly egregious given the lack of gender representation on the panels, but that’s a subject for another post.
**I am not even sure that this argument holds up well, but that is the subject of a different post.
***I find this argument highly suspect as well, but again that is the subject of a different post. Stay tuned!
Charli, I agree with you that the whole “robot rape” angle is creepy as hell, but, really, coming from the crew that came up with the term “killer robots” (OMG SO CUTE OMG HASHTAG RETWEET!), do you really have any right to be surprised that the conversation is going this way? The term that you, whatever campaign is there, and HRW are all using demeans the very important discussion you are trying to have. “Robot rape” talk should stop, but quite frankly, the whole debate needs some serious maturity.
Stephanie:
a) Aside from their campaign name, which was chosen for succinctness and accessibility, campaigners are actually primarily using the term “fully autonomous weapons.” The media uses the term killer robots, and I use it as well when I blog or post on FB because, frankly, the terms “lethal autonomous weapons systems” and “fully autonomous weapons” are unwieldy, and each encodes certain problematic assumptions anyway. But you’ve given me a great idea for a follow-up blog post.
b) Does the use of this term render the debate simplistic? I have actually researched that question and found it doesn’t: people’s reactions to the idea of autonomous weapons don’t change significantly whether you call them autonomous weapons or killer robots, or whether you refer to the ban campaign as the “campaign to ban fully autonomous weapons” or the Campaign to Ban Killer Robots. In fact, I think the term killer robots very nicely captures the key point of debate here, which is the importance of meaningful human control over kill decisions.
c) You made a good point on my FB page that my coverage of this event has not (yet) gone into depth on the complexity of the legal/moral discussions at play, and that’s true. I’m here not primarily covering the event but observing as part of my fieldwork (which means my blog reflections primarily relate to my research) and participating in the proceedings by presenting my public opinion survey data. For what it’s worth, this survey data shows that the moral and ethical arguments are much more sophisticated among members of the public who oppose these weapons than among those who support them. I do agree that the debate among law experts here, including proponents of these systems, is considerably more nuanced, and I had planned a later post covering some of that complexity (but not all of it; again, I’m listening with a certain lens due to my research interests). However, a lot of great coverage of this event is occurring on Twitter. I’m primarily following the global civil society feeds @bankillerrobots, https://twitter.com/BanKillerRobots/lists/ccw-may-2014, and #killerrobots, but lots of people present beyond the NGO community are tweeting at #CCWUN, and I expect various other law bloggers are covering this event as well.
d) Finally, yes, I do think everyone has the “right” to be surprised at the rape references and to be skeptical of the more specious claims of AWS proponents and the NGO community. I don’t agree that, just because the NGOs chose a campaign name designed to engage the public (they are campaigners, after all!), they are responsible for the irresponsible and insensitive narratives of male “experts” on these panels.
e) I have a research project on the political significance (or lack thereof) of “science fiction” metaphors in this campaign’s frames and the political debate, which may give you a different perspective on this, and I’ll presently write a post foregrounding some of these arguments based on what I’m seeing here at the meeting.
Thank you for your reply, but it has left me a bit confused on some points. First, you say that the conference is talking about fully autonomous weapons, but the actual humanitarian campaign is called “the Campaign to Stop Killer Robots” (https://www.stopkillerrobots.org/). So… actually this is YOU. If it’s being used in the media, it’s because of YOU. (The royal “you.”)
I could be wrong – maybe the media beat you to this term and the Campaign is just riding the wave of media attention – but either way, you’re perpetuating it for “engagement”.
So, you’re taking it seriously, and that’s great, but you picked a name to garner attention/kitsch. Now that you have garnered attention – “it’s all the media!” doesn’t really cut it.
I mean, it’s a totally genius way to move to engage in humanitarian attention whoring (let’s face it, you all beat out #whereareourgirls by a few years). However, I think there are certain consequences to the framing of the issue that your group was well aware of.
So then to say “oh, well, I did some research and it doesn’t even matter” and then talk about the fact that it IS having an impact on how the media is using the term and its coverage… there is an issue with the math here.
There is a lowering of the conversation here that I find maddening. That people are raising stupid points seems to be a relatively predictable consequence of the framing of this issue. The way this campaign portrays its opponents is hardly conducive to focusing on the real issues.
I think I have made my points and I have stuff to do before I engage more. But have fun being an expert vs their “experts” <- scare quotes for the win, eh? That'll show 'em.
Based on the research I have done, autonomous weapons were already framed as “killer robots” by the media long before the campaign picked up the issue. I have a new working paper on how the campaign navigated issue creation in this cultural context, and I’ll post some preliminary insights shortly, but in brief: to claim the campaign created the meme rather than reacted to it is simply historically inaccurate. And the claim that the term itself affects moral debate on AWS flies in the face of the public opinion data I just presented, which can be viewed here:
https://www.opendemocracy.net/charli-carpenter/how-scared-are-people-of-%E2%80%9Ckiller-robots%E2%80%9D-and-why-does-it-matter
Well done, Charli. The rhetoric being used to justify creating and fielding these weapons is highly gendered, and the more light we can shine on all of the illegitimate reasons for their creation, the better.
Stephanie, I don’t think I know you, but I feel a need to say something here. Charli describes some of the reasons we chose the name “Campaign to Stop Killer Robots.” And yes, we did it purposefully, because weapons that can make the decision to kill human beings on their own are terrifying, and people need to think about the ethical lines that would be crossed by allowing robots to kill humans. To do that, you have to get people’s attention. “The Campaign to Stop Lethal Autonomous Weapons Systems” is hardly riveting to the general public. But the fact that we chose that name has nothing to do with proponents of robotic weapons systems using the “robots won’t rape” argument. They are trying to play up the supposedly “ethical” and “humanitarian” elements of going with robotic killing systems. They’d make that argument no matter what we’d named our campaign. At a recent meeting about lethal robots, full of military personnel and military weapons contractors, I challenged the charge that we went with a “hyped up” campaign name by noting the names they have chosen for their weapons: “Reaper,” “Predator,” “Raptor,” and Lockheed Martin’s new “Terminator” weapons system. The men all “chuckled” in acknowledgment of the obvious. “Robot rape” is more than “creepy.”
At the risk of sounding like Bill Clinton, doesn’t it depend on how you define rape?
Theoretically, one could argue that the reason that wartime rape is considered so horrible is because of a number of factors:
1. The risk of being impregnated and forced to carry a child from a rival tribe or community, and the resulting damage to an individual’s reputation and lineage. (A robot could rape but not impregnate.)
2. The fact that rape represents loss of ‘purity’ specifically because the victim has been gazed upon by a member of the opposite sex and also often by a member of a rival group. (Arguably, a robot does not have a gaze?)
3. The emotional aspect of rape — specifically the fact that the rapist seeks to humiliate the person they are raping and the fact that it’s an expression of rage. (The robot could rape you but it would not feel anything about doing it.)
Thus, theoretically, one could argue that being raped by a robot would be a disgusting, horrible crime, but it would not have the same meaning as rape by another human because it lacks these elements. It would be more like a medical procedure performed by the Nazis than rape as it has normally been understood within religious, cultural, and historical contexts.
Regarding #3, it would still be the other side doing it to the victim, and choosing to do it, just with a more sophisticated non-penile tool than a broomstick or tire iron.
Regarding #2, the humiliation would be compounded by the fact that the act would likely be witnessed by a local or remote operator as well as whoever sees the resulting recordings.
Regarding #1, I don’t mean to compound the horror, but it doesn’t seem impossible that such a machine could be designed to impregnate (even with fertilized embryos!).
The term “autonomous” seems misleading. Unless we develop generalized artificial intelligence, the robots will have to be programmed. And the programmers will have to make choices about what to tell the robots to do. Given that, robots are really no different from other “benign” tools that humans have chosen to use in nefarious ways: baseball bats, car batteries, nails, cattle prods, tire irons, glass, metal bedframes, etc., etc.
David, I’m curious: a) how would you define “generalized artificial intelligence”? and b) would you feel OK with a machine without “intelligence” tasked with the ability to autonomously target and kill human beings? If so, why?
(a) I’m using a term of art here that I may not properly understand (I’m not a computer scientist), but I think the basic idea is that there’s a strong difference between an essentially sentient machine that can make decisions like humans do and a machine that’s basically operating off a human-engineered script. Now of course that distinction is problematic: we ourselves operate off all sorts of biological and instinctual (and social) “scripts,” and any robot capable of “truly” autonomous decision-making would have to be given something to work with as a starting point. Fear the paperclip maximizer.
(b) I don’t think I’ve made up my mind on this issue.* Whether the use in combat of traditional weapons such as guns and knives is acceptable depends on who’s using them and for what purpose. Insofar as the kinds of “autonomous” (see a) robots that we can reasonably foresee are just more capable and more remotely operated versions of the former, I think there’s a continuum of acceptability that makes it hard to say what kind of weapons system of this type is okay and what isn’t. Many people agree that biological and nuclear weapons are unacceptable in combat because they’re difficult to use discriminately, but guns and knives, which for better or worse do seem to be broadly accepted tools of war, can be used indiscriminately, too. Are robots more like the former or the latter? It depends on what they’re programmed to do, by whom, to whom, and for what ends. (Note that I agreed entirely with you on the rape issue in a different comment. In this comment I was just questioning the word “autonomous.”)
* I actually had hoped that the Geneva conventions and the Kellogg-Briand pact would’ve made the whole question moot, but sadly I’m in the minority.
The creation of a robot able to rape wouldn’t go unnoticed, though. I doubt any government would invest a sizeable amount of money in a creation that would immediately attract total international outrage… especially when human soldiers, sadly, are already quite ready to carry out the deed.