This is a guest post from Michael C. Horowitz (@mchorowitz), Associate Professor of Political Science and Associate Director of Perry World House at the University of Pennsylvania.
Last week, Charli Carpenter published an important piece advancing the conversation about public attitudes, public conscience, and autonomous weapons. Her post critiqued my recent article in Research & Politics on public opinion and autonomous weapons. As a former Duck contributor, I am excited to return and further the dialogue (for a longer version of this post that contains more detailed responses to some other parts of Carpenter’s piece, go here).
Carpenter’s path-breaking survey on public opinion and autonomous weapons was the departure point for my Research & Politics article. She persuasively showed widespread public opposition, in principle, to autonomous weapon systems (AWS). My research builds on hers by examining how plausible political frames would affect public opposition to autonomous weapons. Carpenter and I actually agree on a great deal about public attitudes concerning autonomous weapons, and even about what my own research shows, though we have some disagreements about survey methodology.
My paper demonstrates what Carpenter says it shows: that public attitudes about AWS will vary depending on the “primes” or “frames” used to provide context for the discussion. In particular, when AWS are presented as more necessary, defined in several different ways, public support rises. This is consistent with decades of political science research on how elite framing of issues shapes public attitudes. It is different from measuring public attitudes in a vacuum, and I hope that my original paper did enough to highlight this distinction. As I state on page 1, the article’s takeaway is that “public opposition to autonomous weapons is contextual” (similar statements appear on pages 2 and 7). This type of inquiry comes straight from the standard playbook of public opinion survey and field experiments, as research by Jamie Druckman, Diana Mutz, and others shows.
The use of experimental conditions to better understand important humanitarian topics includes studies on drones, climate change, same-sex marriage, race and tolerance, poverty, immigration, health care, and torture, among other topics (the links above go to research that uses experimental conditions to evaluate public attitudes; the torture pieces even build on the Gronke et al. research Carpenter cites). My results similarly show that public attitudes concerning autonomous weapons vary based on the frames used in the discussion.
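To make this design concrete, here is a minimal sketch in Python of the basic logic of a framing experiment: respondents are randomly assigned either to an unframed baseline condition or to a “necessity” frame, and support rates are compared across the two groups. The condition labels, sample sizes, and support rates below are simulated for illustration only and are not drawn from my actual study.

```python
# Simulated illustration of a two-condition framing experiment.
# All numbers here are hypothetical, not taken from the actual survey.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # respondents randomly assigned to each condition

# 1 = supports AWS development, 0 = opposes.
control = rng.binomial(1, 0.25, size=n)  # unframed baseline condition
framed = rng.binomial(1, 0.40, size=n)   # hypothetical "necessity" frame

# Because assignment is random, a simple difference in support rates
# estimates the framing effect.
t_stat, p_value = stats.ttest_ind(framed, control)
print(f"support, unframed: {control.mean():.2f}")
print(f"support, framed:   {framed.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```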
I then conclude that “More directly, from a policy perspective, these results suggest that it is too early to argue that AWS violate the public conscience provision of the Martens Clause because of public opposition.” The logic behind my statement is that, since presenting AWS in a positive light leads to significantly more public support for their development, public opinion on this topic is still malleable. If context shapes attitudes in the case of autonomous weapons, as it does in so many other areas, then we need to be careful when asserting what public attitudes “are.” More research is necessary.
Thus, my paper did not make the claim Carpenter objects to most strongly – that I was directly measuring the public conscience. I was instead testing whether context shapes attitudes in the case of autonomous weapons by comparing experimental frames to an unframed condition, as scholars do in many other areas. What Carpenter and I disagree about most, it seems, is actually survey methodology.
More broadly, highlighting context in surveys of public attitudes is important because, in the real world, as Baum and Potter show, public opinion on foreign policy issues does not exist in a vacuum. Individual citizens tend to know little about (and pay little attention to) foreign policy issues. Cleanly measuring public attitudes is also tricky, especially on an issue such as autonomous weapons, which is not a hot-button issue for the American public. Survey respondents are likely to have low levels of information, meaning their opinions may represent what John Zaller calls non-attitudes. Because of their informational advantage over the public, the way elites (whether in government, NGOs, or elsewhere) raise and discuss issues shapes public attitudes. Public attitudes are essentially formed in a process that involves framing.
In an environment where survey respondents have low levels of information, one can measure the public’s instinctive reaction with a “clean” survey and get a snapshot of public attitudes. However, public attitudes on many topics also change over time due to exposure or information (for example, same-sex marriage). Thus, experimental conditions that reflect potential elite argumentation show how real world debates may influence public attitudes (for another example on drones, see this). This is also one reason why my paper tested how the level of information respondents have about autonomous weapons influences their opinions (see Figure 2).
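As an illustration of the kind of moderation analysis Figure 2 reports, the sketch below fits a logistic regression with a frame-by-information interaction on simulated data. The variable names, coefficients, and the direction of the interaction are all assumptions made for demonstration purposes, not estimates from the paper.

```python
# Simulated sketch of a frame-by-information interaction, the kind of
# moderation Figure 2 examines. Coefficients are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
framed = rng.integers(0, 2, size=n)  # 1 if shown a pro-AWS frame
info = rng.integers(0, 2, size=n)    # 1 if high prior information

# Assumed data-generating process: the frame raises support, but less
# so among high-information respondents.
logits = -1.1 + 0.7 * framed + 0.3 * info - 0.4 * framed * info
support = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Logistic regression of support on frame, information, and their interaction.
X = sm.add_constant(np.column_stack([framed, info, framed * info]))
result = sm.Logit(support, X).fit(disp=0)
print(result.summary(xname=["const", "framed", "info", "framed_x_info"]))
```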
There are also limits to clean surveys with open-ended prompts, the type Carpenter prefers. Nearly all survey questions, including Carpenter’s, bias respondents through their framing. And asking people about autonomous weapons, for example, means the responses conflate respondents’ views on autonomy with their views on weapons. Someone who is opposed to all weapons, for instance, might oppose the development of autonomous weapons not because they are autonomous, but because they are weapons.
Additionally, since many people have unformed opinions, open-ended prompts can lead to response instability. A lack of eloquence, or of a sufficient frame of reference for answering an open-ended question, can complicate the results. In particular, moral attitudes are often unconscious, making them particularly hard to articulate. Methodological pluralism is therefore helpful for getting multiple takes on public attitudes.
It is also important to note that my study was not, as Carpenter states, “biased in favor of killer robots” because the experimental conditions are “drawn purely from pro-AWS hypotheticals.” The experimental conditions focus on issues such as protecting US troops and military effectiveness, with some respondents, for example, told that AWS are more effective than other weapons, and others told they are not more effective (all question wording, data, and replication information is available here).
It is true that I did not include a condition in which AWS were definitively worse than other weapons, but ironically, that is because Carpenter’s research persuasively showed baseline public opposition to AWS. I strongly suspect that conditions in which AWS were definitively worse than other weapons would increase public opposition (though this is worth testing in future research). Thus, any bias was due to the strength of her initial survey.
Given Carpenter’s finding of public opposition to AWS, my experimental design attempted to test the scope of that opposition. After all, if frames where AWS were more effective than other weapons or would protect US troops did not lead to greater public support for AWS, that would be especially strong evidence about the level of public opposition. Instead, the results show that context matters.
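For readers curious how a multi-arm design like this might be analyzed, here is a hypothetical sketch using a linear probability model with the unframed baseline as the reference category. The arm names loosely echo the frames described above, but the support probabilities are invented, and this is not the paper’s actual estimation (the real question wording and replication code are at the link above).

```python
# Hypothetical multi-arm comparison: each frame condition vs. an
# unframed baseline. Arm names and support rates are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
arms = {  # assumed probability of supporting AWS development per arm
    "baseline": 0.25,
    "protect_troops": 0.42,
    "more_effective": 0.45,
    "not_more_effective": 0.28,
}
rows = [
    {"condition": arm, "support": rng.binomial(1, p)}
    for arm, p in arms.items()
    for _ in range(500)  # 500 simulated respondents per arm
]
df = pd.DataFrame(rows)

# Each coefficient estimates that frame's effect on support relative
# to the unframed baseline condition.
model = smf.ols(
    "support ~ C(condition, Treatment(reference='baseline'))", data=df
).fit()
print(model.params.round(3))
```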
The next step could be follow-up research looking at how pro- and anti-AWS moral, humanitarian, and military frames influence public attitudes about AWS (as well as more representative surveys outside the United States, building on research by the Open Roboethics Initiative, and surveys comparing attitudes about AWS to other weapons). Our understanding of public opinion in this area is still limited. Combined with the open-ended survey prompts that Carpenter prefers, this could further build knowledge. This is consistent with my conclusion that “[M]ore research is necessary to scope out how the context influences public support for and opposition to autonomous weapons.”
Conclusion
Carpenter concludes that “Horowitz’ own data on autonomous weapons proves his assertion wrong.” Since my only claim is that public opinion on autonomous weapons is context-dependent, and since Carpenter agrees with that point in another part of her post, I actually think we agree more than we disagree. Interestingly, my baseline results replicate, in part, Carpenter’s initial findings of baseline opposition to AWS, as she states. This should increase confidence in the experimental results: even though the study utilized a convenience sample, not a representative population sample, it generated results similar to Carpenter’s survey.
My paper does not support or oppose a ban on autonomous weapons. It is social science research – survey results that attempt to better understand the conditions in which the very opposition Carpenter identifies may or may not hold.
If policymakers can present AWS in ways that make them seem more (or less) attractive and thereby affect public support, this has implications for both policy and empirical social science. The bar for speaking for humanity by claiming that any weapon violates (or does not violate) the global public conscience should be high. As stated above, if public opinion were not malleable even when respondents were presented with information suggesting the effectiveness and utility of AWS, that would say something unique about the level of public opposition. My results show, however, that as with many other issues, public opinion on AWS depends on the way the issue is presented.
For those who support a ban on “killer robots,” these results also have implications. There may be nascent public opposition to autonomous weapons, but that opposition is politically fragile. Public conscience, of which public attitudes are a part (as Peter Asaro argues), might not be manifest at the moment societies decide whether to develop or use autonomous weapons, depending on the scenario. If the goal is to “ban killer robots,” therefore, my results show it is useful to better understand the interaction between elite framing and public attitudes.
Finally, Carpenter’s piece also raises crucial methodological questions. Do normative beliefs exist in a vacuum, and what is the best way to measure them? Carpenter prefers “clean” surveys with open-ended prompts in which respondents report why they hold the attitudes they do. Yet open-ended surveys are not perfect either, as I describe above. Moreover, as autonomous weapons become more politically salient, elite discourse will increasingly shape public viewpoints. It therefore makes sense to also understand how different approaches to explaining the issue of autonomous weapon systems may influence those public attitudes. Methodological pluralism is therefore critical to moving the discussion forward and building scholarly knowledge about public attitudes on autonomous weapons.
Michael, thanks for this thoughtful reply to my post critiquing your article. I want to be crystal clear that I am not critiquing the use of survey experiments per se. My critique is of the way you inferred from your results that “it is too early to argue that AWS violate the public conscience provision of the Martens Clause because of public opposition.” That may be true for other reasons – for example, we need global survey data, not just US data, to get a firmer picture, and you are right that public opinion polls are only one blunt instrument for getting at the ‘public conscience’ – but it is NOT the case that your new finding, after priming your (also US-based) poll respondents, says anything whatsoever about the public conscience or has any bearing on my original result.
The key reason for this is that the whole concept of the “public conscience” comes from international humanitarian law, whose key underlying principles ask military commanders and political elites to weigh military necessity against humanitarian concerns, and where the law leaves grey areas, to refer back to the “dictates of the public conscience” – conscience being not just what we think is expedient, but what we think is moral and appropriate.
In my survey, using qualitative analysis of open-ended responses, we were able to measure exactly how people were weighing ethical v. interest-based concerns, how people who fell on opposite sides of the debate weighed them differently, and where people come down when they’re “not sure.” Your survey asked people to think only in terms of short-term military necessity (a kind of killer robot version of the ticking-time-bomb scenario) and ignore both long term interests (like avoiding a gradual dehumanization of war) and short-term humanitarian effects (like collateral damage). So you weren’t measuring the public conscience so much as asking people to turn their consciences off long enough to answer your survey questions. And that’s why it’s inaccurate to suggest that your findings have any bearing on mine.
A couple of thoughts, prefaced by the disclaimer that I’m not an international lawyer nor an expert on international humanitarian law.
First, if one glances at the piece on the ICRC website about the Martens Clause that Prof. Carpenter linked in the earlier post, it’s fairly clear there’s little consensus about how to interpret the Clause or what the reference to “the public conscience” is supposed to mean. One reading would suggest that the Clause is simply a way of saying that “general principles of int’l law” apply to actions or behavior not specifically mentioned in the 1899 Hague Convention; the Clause refers to “general principles [of int’l law]…as they result from” (emphasis added) several factors, one of which is “the dictates of the public conscience.” What the “public conscience” means in this context is not altogether clear, and whether it can be measured by surveys of public opinion, regardless of how they are worded, is perhaps doubtful, or at least arguable. Maybe it’s better measured by looking at the opinions of writers on int’l law or ‘publicists’, to use an old word. There’s a certain circularity there, to be sure, but probably no more than can be found in other areas of customary int’l law.
Second, the old standard for humanitarian intervention (which was always a somewhat controversial notion) in customary international law was that it is appropriate or justified in response to acts that “shock the conscience.” Whose conscience? How measured? The same kinds of problems arise here. Of course there are some acts that shock every (‘normal’) person’s conscience, but in other cases there are often circumstances that make application of this old standard somewhat difficult, even when one looks at state practice, the writings of courts and publicists, and the other things that are supposed to determine customary int’l law. Did the drafters of the basic R2P documents, from 2005 if memory serves, make any reference to the “shock the conscience” standard? Or did they decide that, so to speak, it was more trouble than it was worth? (That’s not a rhetorical but an actual question to which I don’t know the answer and I’m not taking the time to look it up.)
In sum, I sort of wonder whether the authors of the Human Rights Watch report were wise, tactically, to refer to the Martens Clause. On the other hand, since the law w/r/t autonomous weapons is, from what I gather, fairly undeveloped, I can understand why the HRW authors thought they had to reach for whatever arguments seemed available.
I see now, glancing at the earlier post, that Prof. Carpenter also linked to a piece from 2000 by Theodor Meron in the Am J Intl Law about the Martens Clause. I suppose I shd have read that before writing the comment above. (However I am fairly certain that the # of people who care about my views on this matter (or indeed on any matter, but put that aside) is minuscule.)