In May of 2014, the United Nations Convention on Certain Conventional Weapons (CCW) first considered the issue of banning lethal autonomous weapons. Before the start of the informal expert meetings, Article 36 circulated a memorandum on the concept of “meaningful human control.” The document attempted to frame the discussion around the varying degrees of control over increasingly automated (and potentially autonomous) weapons systems in contemporary combat. In particular, Article 36 framed the question as what the appropriate balance of control ought to be over a weapons system that can operate independently of an operator in a defined geographical area for a particular period of time. Article 36 does not define “meaningful control,” but rather seeks to generate discussion about how much control ought to be present, what “meaningful” entails, and how computer programming can enable or inhibit human control. The states parties at the CCW agreed that this terminology was crucial and that no weapons system lacking meaningful human control ought to be deployed. The Duck’s Charli Carpenter has written about this as well, here.
Last month, in October, the United Nations Institute for Disarmament Research (UNIDIR) held a conference on the concept of meaningful human control. Earlier this month, states again convened in Geneva at another CCW meeting and agreed to further consider the matter in April of 2015. Moreover, other civil society groups are now beginning to think through what this approach entails. It appears, then, that this concept has become a rallying point in the debate over autonomous weapons. Yet while we have a common term around which to agree, we are not clear on what exactly “control” requires, what proxies we could utilize to make control more efficacious, such as geographic or time limits, or what “meaningful” would look like.
Today I had an engaging discussion with colleagues about a “semi-autonomous” weapon: Lockheed Martin’s Long-Range Anti-Ship Missile (LRASM). One colleague claimed that this missile is in fact an autonomous weapon, as it selects and engages a target. Another colleague, however, claimed that it is not an autonomous weapon because a human being preselects the targets before launching the weapon. Both of my colleagues are correct. Yet how can this be so?
The weapon does select and engage a target after it is launched, and the particular strength of the LRASM is that it can navigate in denied environments where other weapons cannot. It can change course when necessary, and when it finds its way to its preselected targets, it selects among those targets based upon an undisclosed identification mechanism (probably similar in image recognition to other precision-guided munitions). LRASM is unique in its navigation and target-cuing capabilities, as well as its ability to coordinate with other launched LRASMs. The question of whether it is an autonomous weapon, then, is really a question about meaningful human control.
Is it a question about “control” once the missile reaches its target destination and then “decides” which ship amongst the convoy it will attack? Or is it a question about the selection of the grid or space that the enemy convoy occupies? At what point is the decision about “control” to be made?
I cannot fully answer this question here. However, I can raise two potential avenues for the way forward. One is to consider human control not in terms of a dichotomy (there is either a human being deliberating at every juncture and pulling a trigger or there is not), but in terms of an escalatory ladder. That is, we start with the targeting process, from the commander all the way down to a targeteer or weaponeer, and examine how decisions to use lethal force are made and on what basis. This would at least allow us to understand the different domains (air, land, sea) we are working within, the types of targets likely to be found, and the desired goals to be achieved. It would also allow us to examine when particular weapons systems enter the discussion. For if we understand what types of decisions, drawing on various (perhaps automated) types of information, are made along this ladder, then we can determine whether some weapons are appropriate or not. We might even glean what types of weapons are always out of bounds.
Second, if this control ladder is too onerous a task, or perhaps too formulaic and likely to induce a perverse incentive to create weapons right up to a particular line of automation, then perhaps the best way to think about what “meaningful human control” entails is not to think about its presence, but rather its absence. In other words, what would “meaningless” human control look like? Perhaps it is better to define the concept negatively, by what it is not, rather than by what it is. We have examples of this already, particularly with the US’s policy regarding covert action. The 1991 Intelligence Authorization Act defines covert action very vaguely, and then in more concrete terms defines what it is not (e.g. intelligence gathering, traditional or routine military or diplomatic operations, etc.). Thus clear cases of “meaningless” control would be launching a weapon system without undertaking any consideration of the targets, the likely consequences, and the presence of civilian objects or persons, or launching a weapon that perpetually patrols. This is of course cold comfort to those who want to ban autonomous weapons outright. Banning such weapons would of course require a positive and not a negative definition.
States would have to settle the question of whether any targets on a grid are fair game, or if only pre-identified targets on a grid, and not targets of opportunity, are fair game. It may also require states to become transparent about how such targets on a grid are confirmed, or how large a grid one is allowed to use. For if a search area ends up looking like the entire Pacific Ocean, that pesky question about “meaningful” raises its head again.
I think the ladder makes the most sense, because that is in effect what we already have. It strikes me that the LRASM is a difference in degree rather than in kind from older weapons systems, from the ubiquitous AIM-9 air-to-air missile (both radar and infrared homing variants) to the more sophisticated AIM-120. “Meaningful,” then, rides specifically on the selection of the target, I suppose, which is where the differences in degree for the LRASM come into play. But if the LRASM could be relied on to hit a particular kind of ship, then I suppose human targeting is still at work, just at a greater degree of abstraction. Slippery ladder indeed.
Yes! It is like the Brits say, “climbing the greasy pole.” :-)