The New Scientist reports that Russia has announced it will deploy fully autonomous lethal robots as guards around missile bases. Several countries already have autonomous sentry technology (the US, Israel and South Korea) but [UPDATED: if this report is accurate, it suggests Russia might be the first country to announce] it will openly deploy such weapons without a human in the loop.
The reported announcement comes at an important moment in international norm-building efforts to regulate the deployment of fully autonomous weapons. At the behest of a global civil society campaign, states party to the Convention on Conventional Weapons will convene in Geneva for an “Experts Meeting” on the subject of rule-sets for fully autonomous weaponry. Given that this issue has heretofore been treated as a “pre-emptive” campaign meant to forestall the deployment of weapons before “fiction becomes fact,” it will be interesting to see how these developments in Russia – [UPDATED: which, if these reports are true, could be playing the “norm anti-preneur” in this area, among others] – might affect global sentiment on the urgency of clearer rules.
By “anti-preneurship” I borrow from the intriguing title of an ISA Working Group I happened upon at the conference this year. Though no scholarship has yet come out defining the term (none that I can find; please leave a comment if you know of some), I am using it to describe the efforts of powerful actors to forestall the development of, or redirect intersubjective understandings of, international human security norms to suit their own political interests.
Examples of this behavior in the norms literature include the Bush Administration’s effort to reinterpret and reconstitute the norm against torture: Ryder McKeown has argued this is a kind of “norm entrepreneurship,” though the concept of anti-preneurship probably fits better, as the idea is not necessarily to build a new norm but to shift an existing norm’s meaning to its moral opposite. Quite possibly, Russia’s invocation of the “Responsibility to Protect” doctrine might be understood this way as well: rather than merely a smokescreen for Machiavellian irredentism, it could be an active effort by Russia either to reconstitute the R2P norm to its own liking or (more likely) to weaken it through what it understands the West will interpret as a bad example – the very bad example Russia has been warning of for years in its oppositional stance toward the R2P norm as a threat to the UN Charter rules.
I think another example of “anti-preneurship” may be to strategically undermine the development of new norms in the first place. Some of this behavior occurs when states or transnational stakeholders contest the development of norms, a dialectic about which Clifford Bob has written and which will no doubt occur at the Geneva Experts’ Meeting on Autonomous Weapons, as well as in every other international forum where this debate will percolate until such time as international rules are codified. But some measure of this also occurs when states attempt, through their practice, to establish a behavior as normative prior to the development of rules against it, as a mechanism for shifting the start point of the discussion.
Does such behavior then strengthen or weaken the hand of human security norm entrepreneurs? It is reasonable to hypothesize that either effect might occur. I don’t know if we have good evidence one way or the other, but the reported Russian deployment of autonomous weapons in the lead-up to this set of important international meetings may provide a good test case. And the concept of international norm “anti-preneurialism” would be a good dynamic on which scholars of norms in IR might more carefully set their analytical sights.
UPDATE: This post has been updated to reflect the uncertainty about the exact nature of Russia’s announcement regarding autonomous sentry guards. As discussed in the thread, neither the New Scientist piece nor the sources it links to contain quotations from a Russian official, so it is unclear whether Russia is in fact planning to openly deploy them for lethal purposes without a human operator, or to keep a human in the loop for targeting purposes. Therefore this is not “clearly” an example of the Russian “anti-preneurialism” I was writing about: perhaps it is, but perhaps not. While in my mind the jury is out, Mark Gubrud has a great post at his blog arguing that Russia is in fact more or less aligned with other great powers in its approach to autonomous weapons. Check it out. If so, then although the wider point in the post may stand, Russia’s stance on AWS may not be a good example of it. Instead, it could simply be an example of the media misreporting on a political development.
What is the difference between McKeown and Payne (2001 — https://ejt.sagepub.com/content/7/1/37.short) other than specified vocabularies? What’s more, I am not sure that the deployment of terms ironically (or the recognition of such) is such a new thing. Certainly, Chomsky and other lefties have made this critique of the US use of HR language since the height of the Vietnam War.
If I may take a crack at answering that, I think the difference lies in a broader conception of, or interest in, the mechanisms through which particular norms constraining force may be undermined. Rhetorical manoeuvres work under some conditions, but they may not work under others. Meanwhile, the story of how the US’s ‘enhanced interrogation program’ developed is one of very interesting bureaucratic political gaming and legal innovation, which links up to the rhetorical side of things in important ways. Taking all these different dynamics on board requires a bigger ontological framework than ‘framing’ and ‘persuasion’.
Thanks for answering. The Habermasians, I think, would agree with you that rhetoric works under some conditions more successfully than others; for them to say otherwise would revoke their status as critical theorists. To follow up, let me ask a couple more questions. Is the difference between the Habermasians who see framing grounded in power and this new TAN literature the social context of the actors themselves (that includes some interplay between transnational civil society, state interaction, bureaucracies, and epistemic communities)? When is speech not utilizing framing from your account of things? I cannot see a social situation or a deployment of meaning that does not employ framing in some form.
I’m not sure that the Habermasians see framing as grounded in power. They seem to hold out for the possibility of ‘rationally-motivated consensus’. For a more power-centric view, definitely check out Krebs and Jackson 2007 (a great article). My point about broadening our view of which causal mechanisms may be operative here is not that there is something operating completely outside of discourse, but rather that ‘persuasion’ is only one factor among many, and that the generators of variance or change do not lie only in the outcome of arguments held in the delimited range of discursive spaces that interest the Habermasians. So I’d prefer to start from a practice-centred view in which the act of framing interacts with other kinds or aspects of transactions between actors and their environments, such that a variety of causal processes can be specified and compared.
I’ve checked out Krebs and Jackson. I don’t follow your claims about the Habermasians — especially the suggestion that power drops out of their analysis — critical theory by definition is a critique of power. Payne (2001) provides a critique of the role of power in framing. Whether that article accurately describes the role of power in framing is another question, but appeals to starting from practices or reality need to be better defined. I would agree that particular forms of power are not explained or examined in Habermasian approaches.

My problem with TANs is twofold. First, various TAN scholars discovered problems with actors utilizing framing in transnational civil society, but then pass that off as a novel claim when other theoretical positions have for some time traced, defined, and explained the role of power in transnational politics with a variety of ontologies, contexts, and imaginaries. Second, TAN scholarship operates, at least implicitly, within a teleologically normative framework, but such a telos is disavowed as such by TAN writers. Thus, Baptist-burqa networks and the ironic appropriation of norms become part of a decline narrative about transnational civil society and an affirmation of that form of politics. Empirically, I find the decline narrative problematic because it dodges the question of how powerful actors have ironically and cynically manipulated and produced norms about transnational and international behavior in a way that constituted that space in the first place, from day one. Avoiding this decline narrative, then, could simply mean retelling the theoretical narratives about this political space, admitting it was never as much as we would like it to be, and treating it as a set of politics that challenges some of the doxa of this space with a different set of practices.
Right, I’m less interested in speaking to the TAN literature, largely because I want to avoid treating my subject as an example of the rise and/or fall of norms, and instead to treat it as a study in the institutional politics surrounding the interpretation of existing normative structures. I think your claim that there is a teleology implicit in the way TAN scholars have often examined norm transformation is apt.
But I think there is a major difference between Krebs and Jackson, and people who follow their lead, and the Habermasians. The former are interested in how argumentation (or framing) is constitutively forceful, in that it functions to close off avenues of action in all cases. Discursive formations are thus purely instrumental, whether or not the actors who brought them into existence have a good-faith commitment to the truth of their propositional content. This is an awfully different ontology, with different kinds of subjects and possibilities, compared to the Habermasian view, which allows for the possibility of non-coercive discourse and which assigns moral significance to justificatory success under ideal conditions.
You folks might be interested in this paper:
Guzman, Sebastian. 2013. “Reasons and the Acceptance of Authoritative Speech: An Empirically Grounded Synthesis of Habermas and Bourdieu.” Sociological Theory Vol. 31 No. 3, pp. 267-289.
Guzman does some social experiments and identifies the conditions under which Habermasian and Bourdieuan mechanisms work.
Thanks. I will check it out.
I accept the difference in ontologies, just not the way you describe facts and norms. I would be interested in how you lay out these claims in a larger space like a paper.
Please send me an email at simon.pratt@mail.utoronto.ca and perhaps I can show you one of my conference papers on this (nothing published yet).
Charli, I would not hasten to finger Russia as “the first country to openly deploy such weapons without a human in the loop.”
First, I do not think we have a reliable report on this. The New Scientist report says the sentrybot deployed at the missile sites is armed with a 12.7 mm machine gun and weighs 900 kg. However, it links to a YouTube video showing a fairly small bot that does not look like it would weigh that much or carry a gun that large. The accompanying blurb states that the gun is a 7.62 mm. It also states that the system is “remote controlled.”
The blurb says the system has “automatic capture function” (функцию автоматического захвата), and that “The aim is held when moving the turntable by 360 degrees.” It is not clear what this means but it sounds like the ability to remain aimed at a designated target, and possibly even the ability to acquire a target based on motion or some other criteria. Given the size of the system, it could not have very sophisticated target discrimination, and it would seem quite reckless to let it loose on a missile base in fully autonomous mode. So, I highly doubt that is what they are doing.
Other than the YouTube video, the NS report does not state what its sources are, but does state that the system is called “mobile robotic complex” which is also what the video calls it. An ITAR-TASS report from 21 March also uses this terminology. So, the video might show a smaller version but then we haven’t seen the bigger one.
Another ITAR-TASS report from 12 March, which many other reports seem to be drawn from, states that the system deployed at the missile sites has “an option of aiming weapons, tracking and hitting targets in automatic and semi-automatic control modes.” However, again, it is not fully clear what this means.
Second, even assuming this system might have a fully autonomous lethal capability, it would not be the first. South Korea and Israel both have stationary sentry systems for sale and, I believe, in use, which have fully autonomous capability, although in both cases they are probably normally set to human-in-the-loop. The US Aegis and Patriot missile defense systems also have fully autonomous modes. So, Russia would not be the first to develop and deploy systems with this capability.
In general, all nations involved in this seem to be moving toward systems that have both remote-control and fully autonomous capabilities. It is important to understand that in real combat, the fully autonomous capabilities are likely to be switched on, meaning that the retention of, and even emphasis on, human-in-the-loop capability is just a peacetime safeguard that seeks to avoid the implications of full lethal autonomy without actually negating them.
Mark, you may be right: I hope so. I think the important datapoint is not the weapon itself, which no doubt has in-the-loop, on-the-loop, and fully autonomous settings, but rather the statement of the Russian official about how the weapons would be used. However, it’s also true that the New Scientist, as well as the article below, provides only its interpretation of that statement, not a specific quote. If that interpretation is correct, however, I think it would make this type of deployment qualitatively different from the Samsung Techwin or Israeli sentries, which I understand (correctly?) are not generally deployed in fully autonomous mode.
https://en.ria.ru/russia/20140313/188363867/Russian-Military-to-Deploy-Security-Bots-at-Missile-Bases.html
I think you are correct that the Israeli, South Korean, and US systems which have fully autonomous capabilities are generally operated with human-in-the-loop or on-the-loop. I expect the same is true of this Russian system. I have not seen an actual quote which clearly states otherwise, and I don’t have confidence in the spin put on it in the NS report.
I am uncertain about the Korean Super aEgis II. From what I can find, it operates “automatically” on the DMZ as a sentry bot. This, however, is in compliance with IHL, as it is defensive and on a DMZ…
Mark, I think you make good points, and I have updated the language in the post to make it clear that this is a report on Russia’s intentions rather than a fact, until we find and analyze a translation of an actual Russian official’s statement on the subject. Could well be spin, as you say. On the other hand, Russia’s willingness to buck international norms means I would not be at all surprised if it went ahead and deployed such sentries autonomously. If you find a quote that clarifies the Russian position one way or the other, please let us know!
Charli, there seem to be 3 or 4 distinct claims here, and I don’t think any of them are well grounded. The first is that Russia is deploying fully autonomous robots and turning them loose on missile reservations to target and kill humans autonomously. The second is that Russia is “taking the lead in a new robotic arms race” as David Hambling put it. The third is that Russia is bucking an emergent international norm against fully autonomous weapons. The fourth, and most basic, is that such an international norm is emerging, at least as would be judged by the policies and practices of the US and its allies. So, maybe these aren’t all completely distinct, but that’s why I fudged about the number.
I’ll have a post with detailed analysis up on my own blog soon.
Thanks for this link. I have added it to the update in the post.
This is actually the topic of my dissertation.
Simon, I hope to meet sometime and discuss it, or read it when it’s done. Charli
I presented some work on the targeted killing case at ISA, and I’ll be reprising it at APSA for Avery Plaw’s panel on the subject. I’d be delighted to send you a copy of my paper for the former or to have you attend the latter!
A detailed response and discussion of the Russian mobile sentry system and of Russian military robotics initiatives in general, as well as whether there is an emerging norm which Russia is flouting, can be found here.