Autonomous Arms Control?

15 May 2012, 1824 EDT

Now that this academic year’s loose ends are wrapped up, it is time to refocus attention on the important topic of norm development around autonomous weapons. Fortunately for my case study, much is happening on this front, alongside developments in adjacent but (I argue) distinct issue areas like drones.

Lawfare Blog has been publishing considerable new material related to this debate in recent days, notably a working paper series. I’ll probably react to a number of these papers as I find time to read them; but for now I’ll note another new paper by Ken Anderson and Matthew Waxman here. Anderson and Waxman distinguish (helpfully, I believe) between legal and moral objections to AWS and argue (correctly, I believe) that norm-building efforts should focus on law – namely, whether these weapons can meet the distinction and proportionality principles.

Two minor quibbles, though.
First, Anderson and Waxman’s discussion of distinction focuses only on whether such weapons could someday distinguish between civilians and combatants, not on the other element of the law on “indiscriminate attacks”: whether a given weapon is “controllable.” While codified law uses the language of “limiting” a weapon’s effects, this provision has been interpreted in customary terms as relating to the broader concept of “control,” suggesting a normative aversion to “uncontrollable” weapons.

While it’s conceivable that such weapons could someday meet the distinction principle in the sense of distinguishing lawful from unlawful targets, it strikes me that fully autonomous weapons would be by their nature uncontrollable (indeed, the ethical opprobrium against such weapons being ‘unleashed’ is, I think, a big part of what concerns anti-AWS advocates on moral grounds). In terms of norm development, it will be interesting to see whether the logical result of both AWS development and anti-AWS advocacy is a clearer understanding of the relationship between “controllability” and “ability to distinguish.”

The second quibble is that Anderson and Waxman seriously downplay the potential for a treaty-making process on this issue:

Limitations on autonomous military technologies, although quite likely to find wide superficial acceptance among non-fighting states and some nongovernmental groups and actors, will have little traction. Even states and groups inclined to support treaty prohibitions or limitations will find it difficult to reach agreement on scope or definitions because lethal autonomy will be introduced incrementally – as battlefield machines become smarter and faster, and the real-time human role in controlling them gradually recedes, agreeing on what constitutes a prohibited autonomous weapon will be unattainable. Moreover, there are serious humanitarian risks to prohibition, given the possibility that autonomous weapons systems could in the long run be more discriminating and ethically preferable to alternatives; blanket prohibition precludes the possibility of such benefits. And, of course, there are the general challenges of compliance – the collective action problems of failure and defection that afflict all such treaty regimes.

I find this argument by assertion unconvincing. First, the likelihood that limits on AWS will gain traction with states seems no weaker than it was for blinding lasers or landmines – both weapons with military utility around which NGOs ultimately convinced states to establish treaty rules. The point on definitional ambiguities is well taken, but this kind of problem afflicts all treaty-making processes and doesn’t necessarily doom them. In fact, anti-AWS advocates are already grappling with precisely where and how they would draw that red line; we can expect norm development in this area to occur as they work out the messaging. The third point, about the humanitarian costs of a prohibition, is a normative claim, not a prediction about the likelihood of norm development. Indeed, as Dick Price pointed out long ago, the history of weapons bans does not in all cases correspond to objective humanitarian reasoning. As for the challenges of compliance, these cannot be invoked to predict that we’ll see no treaty: all treaties suffer from this problem, yet some get negotiated anyway.

As for such constraints being “likely” to find support among NGOs, it is not only likely; it is already happening. More on that to follow.