The Campaign to Stop Killer Robots secured an important victory last week when delegates of States Parties to the Convention on Certain Conventional Weapons (CCW) voted unanimously to take up the issue as part of their work to oversee the implementation and further development of the 1980 treaty, which regulates weapons causing inhumane injuries to combatants or civilians.
The CCW process, which includes yearly meetings of state parties as well as review conferences every five years, has become a periodic forum for discussions not only of how to enforce existing rules but also of norm-building around the humanitarian effects of conventional weapons more broadly. Norms around landmines, cluster munitions, blinding lasers and incendiary weapons have been incubated in this forum in the past, so it is no surprise that anti-AWS campaigners used this year’s meeting in Geneva as an opportunity to press their cause regarding the dangers of autonomous weapons.
As Matthew Bolton writes, that governments voted to “mandate” the CCW process to examine AWS means the issue is decisively not just on the humanitarian disarmament advocacy agenda but also on the international agenda. This “mandate” to consider the issue will include a four-day meeting next year, and a report by the Chair to the States Parties. A single veto could have prevented this international body from further consideration of the issue, and the fact that important stakeholders like Russia and the US did not forestall a larger discussion signals the salience of the issue and the tremendous agenda-setting success enjoyed by the campaign so far.
Is this “the first step towards outlawing killer robots?” Maybe. It is certainly a step in a norm-building process. What that new norm might look like depends on what happens at the Experts’ Meeting next year, the findings of the report that is to result from it, and any subsequent work leading up to the 2016 Review Conference. Historically, the CCW process has in many respects been a less-than-favorable framework for working out robust, red-line disarmament norms. Most of the weapons the treaty covers have been regulated rather than banned outright. And the institution’s consensus-based decision-making process makes it easy for veto players to water down new normative understandings.
For this reason, despite its symbolic importance, the CCW’s practical involvement in norm-building sometimes gets overstated. Indeed, global civil society has a history of abandoning the CCW process when it appears that stonewalling will result in weak norms, and establishing independent treaty processes led by like-minded players – a process that Patrick Cottrell has called “institutional replacement.” The Ottawa process that led to the Mine Ban Treaty was the first such initiative; the Oslo process for Cluster Munitions adopted the same model. While this is an important and promising moment, the shape and trajectory of norm-building efforts will depend a great deal on the tenor and outcome of next May’s CCW meeting. And one thing is sure: if that meeting results in weaker norms than hoped for by human security advocates, NGOs may simply take their cause elsewhere.
Campaigners are justly proud and overjoyed that the issue has been taken up by the CCW barely a year after the Campaign got started, the US policy directive was released, and the issue began to surface in public awareness. If there is a precedent for a preventive arms control issue to progress so rapidly from “Terminator” titters onto the global diplomatic agenda, I am not aware of it.
But you are right to be skeptical about what can come out of the CCW process. We must demand a high bar for success, lest the CCW bake us half a loaf, and that serve to put everyone back to sleep.
A “norm” is not enough, unless it is clearly defined in strong terms. The US policy is built around the idea that commanders and operators of “autonomous and semi-autonomous weapon systems shall… exercise appropriate levels of human judgment over the use of force.” The US will no doubt push hard to have this be accepted and endorsed by the CCW as the new norm, leaving it up to states and militaries to determine what levels of human judgment are appropriate.
In particular, the US directive makes it clear that “appropriate levels” do not require direct human judgment in the final decision to kill another human being. The policy allows robots to be tasked on missions which may involve killing either targeted or unknown individuals fully autonomously. The policy allows this in two ways.
Weapon systems that are “intended” for this role can be developed, acquired, and used given certification by three sub-cabinet-level officials that the systems meet the (undefined) “appropriate levels” standard and its related validation criteria.
The policy also allows “lock-on-after-launch homing munitions”, which obviously include anti-aircraft missiles but might also include long-range anti-ship missiles and loitering ground-attack missiles that search within a given kill box for an appropriate target and attack it autonomously. These are classified as “semi-autonomous” and do not require any “high-level” signatures.
The “lock-on-after-launch homing munitions” would certainly target ships, aircraft, tanks, and other military objects which contain human beings. The policy does not state that they may not target freestanding human beings.
Basically, the difference between these two routes to lethal autonomous weapons is one of degree: for the weapons deemed “semi-autonomous” (though actually fully autonomous in the final target identification and engagement decision), a greater level of human discretion in their use is implied; but the “autonomous” weapons, too, are not supposed to be used without “appropriate levels of human judgment” being exercised.
This policy has been incorrectly characterized as a 10-year moratorium on “fully autonomous weapons” that apply lethal force. The DoD itself denies that this is the case. I’m not sure what purpose it serves to say that the US policy is a moratorium, but the unfortunate implication is that if the policy were to be made permanent, and adopted by the CCW as a global standard, that would largely resolve the problem of autonomous weapons.
It is essential to make it clear that adoption of a vague “appropriate levels” standard by the CCW is completely unacceptable.
Equally unacceptable is the US schema in which missiles that de facto select their targets fully autonomously may be considered as only “semi-autonomous” (hence fully permissible) so long as “tactics, techniques and procedures” ensure that there are no persons or objects within the kill box which might be mistaken for the intended targets.
I am deeply concerned that the outcome of the CCW process might be something similar to the US approach. It seems virtually certain that the US will strongly argue for this, and almost equally certain that the US will strongly resist any significantly more restrictive “norm.”
Rather than accept that, it would be far better for the CCW process to stall or fail, and for the Campaign to move toward demanding a forum with a broader mandate, one which would include consideration of the threat to peace and security posed by the global robot arms race, and which could negotiate hard arms control standards based on strong principles.
For now, the Campaign should of course celebrate getting the issue on the diplomatic agenda, and look toward the CCW meeting in May. But in doing so, we must begin defining what success would look like, and distinguishing that from a papering-over of the issue that would serve only to prepare the way for tomorrow’s killer robots.
I disagree with your characterization of so-called “lock-on-after-launch homing munitions.” What you are arguing for here goes too far and is not a very realistic goal.
In today’s world, for any effective long-range anti-ship or anti-aircraft missile, you basically have to have some form of autonomous terminal targeting available. Jamming is a serious and growing threat.
This is especially the case in anti-ship warfare, where getting the exact positions of enemy vessels can be difficult in contested waters, and next to impossible if the enemy has air superiority in the area immediately around its vessels.
The Navy has already acknowledged that the GPS system is susceptible to jamming, and thus that mid-course guidance communications and remote targeting will be unreliable in a jamming environment – hence the autonomous nature of the LRASM, for example.
Having missiles search an approximate area by themselves is necessary in order to locate enemy vessels in this and many other scenarios. It is the principle behind the LRASM, the JASSM, basically all BVR missiles, and many other long-range weapons in use and in development by the US and other countries.
Missiles such as the SM-6 use active radar targeting, where terminal flight control is autonomous. By banning them you would significantly erode our defenses. It is the only way to easily engage at long range in situations where you don’t have the precise whereabouts of your enemy.
Furthermore, I don’t see how you could possibly ban Beyond Visual Range air-to-air missiles, considering they’ve been in use since the late 1950s. I agree with you that in an ideal world, of course, you would want all military engagements to be within 20 km, allowing for perfect confirmation and relying totally on human decision-making. But this is no longer the case; over-the-horizon capability is simply a necessity nowadays. Enacting what you are asking for here would significantly reduce current and future US military capabilities.
We will have to disagree about what missiles are necessary. You should remember that the capabilities you are talking about do not give the US a unilateral advantage; rather, they are threats that we face as well. You correctly point out what a huge problem this is. The answer to the problem is not to plunge deeper into it.
The US needs to accept that its future military options are going to be constrained by the capabilities of other nations. Better to do this through arms control than through losses in war.