Analogies in War: Marine Mammal Systems and Autonomous Weapons

Last week I was able to host and facilitate a multi-stakeholder meeting of governments, industry and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment and use of autonomous weapons systems (AWS).  These two concepts presently dominate discussion over whether to regulate or ban AWS, but neither concept is fully endorsed internationally, despite work from governments, academia and NGOs.  On one side many prefer the notion of “control,” and on the other “judgment.”

Yet what has become apparent from many of these discussions, my workshop included, is that there is a need for an appropriate analogy to help policy makers understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy to AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us to understand the nuance of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.

Autonomous Weapons and Incentives for Oppression

Much of the present debate over autonomous weapons systems (AWS) focuses on their use in war. On one side, scholars argue that AWS will make war more inhumane (Asaro, 2012), that the decision to kill must be a human being’s choice (Sharkey, 2010), or that they will make war more likely because conflict will be less costly to wage with them (Sparrow, 2009). On the other side, scholars argue that AWS will make war more humane, as the weapons will be better at upholding the principles of distinction and proportionality (Müller and Simpson, 2014), as well as providing greater force protection (Arkin, 2009). I would, however, like to look at a different dimension: authoritarian regimes’ use of AWS for internal oppression and political survival.

Autonomous or "Semi" Autonomous Weapons? A Distinction without Difference

Over the New Year, I was fortunate enough to be invited to speak at an event on the future of Artificial Intelligence (AI) hosted by the Future of Life Institute. The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts, to its technological abilities, to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue to proceed on their current trajectory, we will see more complex software architectures and stronger AIs.   Thus the capabilities created in AI will directly affect the capabilities of autonomous weapons and vice versa. While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons.

First, autonomous weapons are those that are capable of targeting and firing without intervention by a human operator. Presently there are no autonomous weapons systems fielded. However, there are a fair number of semi-autonomous weapons systems currently deployed, and this workshop on AI got me thinking more about the line between “full” and “semi.” The reality, at least as I see it, is that we have been using the terms “fully autonomous” and “semi-autonomous” to describe whether all of the different operational functions on a weapons system are operating “autonomously” or only some of them are. Allow me to explain.

We have roughly four functions on a weapons system: trigger, targeting, navigation, and mobility. We might think of these functions like a menu that we can order from. Semi-autonomous weapons have at least one, and as many as three, of these functions operating autonomously. For instance, we might say that the Samsung SGR-1 has an “autonomous” targeting function (through heat and motion detectors), but is incapable of autonomous navigation, mobility, or triggering, as it is a sentry-bot mounted on a defensive perimeter. Likewise, we would say that precision guided munitions are also semi-autonomous, for they have autonomous mobility, triggering, and in some cases navigation, while the targeting is done through a preselected set of coordinates or by “painting” a target with laser guidance.
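
To make the “menu” framing concrete, here is a minimal sketch in Python. The class, the function names, and the classification rule are my own simplification for illustration, not any military or vendor specification, and the capability attributions simply restate the post’s own characterizations.

```python
from dataclasses import dataclass

# The four operational functions discussed above.
FUNCTIONS = ("trigger", "targeting", "navigation", "mobility")

@dataclass
class WeaponSystem:
    """Toy model: which of the four functions run without a human in the loop."""
    name: str
    autonomous: set  # subset of FUNCTIONS handled autonomously

    def classify(self) -> str:
        # "Fully autonomous" only if every function is autonomous;
        # anything in between is labelled "semi-autonomous".
        if self.autonomous == set(FUNCTIONS):
            return "fully autonomous"
        if self.autonomous:
            return "semi-autonomous"
        return "human-operated"

# Examples drawn loosely from the discussion above (illustrative only).
sgr1 = WeaponSystem("Samsung SGR-1 (sentry)", {"targeting"})
pgm = WeaponSystem("Precision guided munition", {"mobility", "trigger", "navigation"})
print(sgr1.classify(), pgm.classify())  # semi-autonomous semi-autonomous
```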

Where we seem to get into deeper waters, though, is in the cases of “fire and forget” weapons, like the Israeli Harpy, the Raytheon Maverick anti-tank missile, or the Israeli Elbit Opher. While these systems are capable of autonomous navigation, mobility, triggering, and to some extent targeting, they are still considered “semi-autonomous” because the target (i.e. a hostile radar emitter or the infra-red image of a particular tank) was at some point pre-selected by a human. The software that guides these systems is relatively “stupid” from an AI perspective, as it is merely using sensor input and doing a representation and search on the targets it identifies. Indeed, even Lockheed Martin’s LRASM (Long Range Anti-Ship Missile) appears to be in this ballpark, though it is more sophisticated because it can select its own target amongst a group of potentially valid targets (ships). The question has been raised whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made.
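
The “representation and search” point can be illustrated with a deliberately simple sketch. Assuming a seeker that merely compares sensor returns against a pre-loaded signature, with a scoring rule and threshold of my own invention, and not the actual guidance logic of the Harpy, Maverick, Opher, or LRASM, the “intelligence” amounts to picking the closest match above a fixed bar:

```python
# Illustrative only: a fire-and-forget seeker reduced to template matching.
# The signature attributes, contacts, and scoring are hypothetical stand-ins.
def best_match(preloaded_signature: dict, contacts: list, threshold: float = 0.8):
    """Return the sensor contact that best matches the pre-selected signature,
    or None if nothing clears the threshold. No judgment about lawfulness here."""
    def score(contact):
        # Naive similarity: fraction of signature attributes the contact shares.
        shared = sum(1 for k, v in preloaded_signature.items() if contact.get(k) == v)
        return shared / len(preloaded_signature)

    scored = [(score(c), c) for c in contacts]
    top_score, top_contact = max(scored, key=lambda pair: pair[0])
    return top_contact if top_score >= threshold else None

# A human pre-selected "hostile radar emitter" before launch; the seeker merely searches.
signature = {"emitter_band": "X", "pulse_pattern": "fire-control", "mobile": True}
contacts = [
    {"emitter_band": "X", "pulse_pattern": "fire-control", "mobile": True},   # match
    {"emitter_band": "S", "pulse_pattern": "air-search", "mobile": False},    # no match
]
print(best_match(signature, contacts))
```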

The rub in the debate over autonomous weapons systems, and, from what I gather, some of the fear in the AI community, is the targeting software: how sophisticated that software needs to be to target accurately and, what is more, to target objects that are not immediately apparent as military in nature. Hostile radar emitters raise few moral qualms, and when the image recognition software used to select a target relies on infra-red images of tank tracks or ships’ hulls, then the presumption is that these are “OK” targets from the beginning. I have two worries here. First, from the “stupid” autonomous weapons side of things, military objects are not always permissible targets. Only by an object’s nature, purpose, location, use, and effective contribution can one begin to consider it a permissible target. If the target passes this hurdle, one must still determine whether attacking it provides a direct military advantage. Nothing in the current systems seems to take this requirement into account, and as I have argued elsewhere, future autonomous weapons systems would need to do so.
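
As a sketch of what taking that requirement into account might look like, consider the following. It is an assumption of mine about how the two-stage test could be proceduralized; the attribute names are hypothetical and no fielded system is known to implement anything like this.

```python
# Hypothetical two-stage permissibility gate, mirroring the argument above.
def is_permissible_target(obj: dict) -> bool:
    # Stage 1: does the object, by its nature, purpose, location, or use,
    # make an effective contribution to military action?
    contributes = (
        obj.get("nature") == "military"
        or obj.get("purpose") == "military"
        or obj.get("location_militarily_relevant", False)
        or obj.get("used_for_military_action", False)
    ) and obj.get("effective_contribution", False)

    # Stage 2: would attacking it offer a direct military advantage?
    advantage = obj.get("direct_military_advantage", False)

    return contributes and advantage

# A military object is not automatically a permissible target: a tank parked
# in a museum makes no effective contribution and offers no advantage.
museum_tank = {"nature": "military", "effective_contribution": False,
               "direct_military_advantage": False}
print(is_permissible_target(museum_tank))  # False
```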

Second, from the perspective of the near-term “not-so-stupid” weapons, at what point would targeting human combatants come into the picture? We have AI presently capable of facial recognition with near-perfect accuracy (just upload an image to Facebook to find out). But more than this, current leading AI companies are showing that artificial intelligence is capable of learning at an impressively rapid rate. If this is so, then it is not a stretch to think that militaries will want some variant of this capacity on their weapons.

What then might the next generation of “semi” autonomous weapons look like, and how might those weapons change the focus of the debate? If I were a betting person, I’d say they will be capable of learning while deployed, use a combination of facial recognition and image recognition software, as well as infra-red and various radar sensors, and they will have autonomous navigation and mobility. They will not be confined to the air domain, but will populate maritime environments and potentially ground environments as well. The question then becomes one not solely of the targeting software, as it would be dynamic and intelligent, but of the triggering algorithm. When could the autonomous weapon fire? If targeting and firing were time dependent, without the ability to “check in” with a human, or if there were simply so many of these systems deployed that checking in were operationally infeasible due to bandwidth, security, and sheer manpower overload, how accurate would the systems have to be to be permitted to fire? 80%? 50%? 99%? How would one verify that the actions taken by the system were in fact in accordance with its “programming,” assuming, of course, that the learning system doesn’t learn that its programming is preventing it from carrying out its mission objectives more effectively?
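
To make the triggering question concrete, here is a minimal sketch of what such a gate might look like. The threshold value, the function names, and the fallback behaviour are assumptions of mine for illustration, not a proposal or any known system’s logic; the verification worry about a learning system drifting from its programming sits entirely outside a check like this.

```python
# Illustrative trigger gate for the questions raised above.
def may_fire(target_confidence: float,
             can_check_in: bool,
             confidence_threshold: float = 0.99) -> str:
    """Decide between firing, deferring to a human, or holding fire."""
    if can_check_in:
        return "defer to human operator"   # human-in-the-loop still available
    if target_confidence >= confidence_threshold:
        return "fire"                      # autonomous engagement
    return "hold fire"                     # below threshold, no human reachable

# When bandwidth or manpower makes checking in infeasible, everything hangs
# on where the threshold sits and how the confidence number was produced.
for conf in (0.80, 0.95, 0.995):
    print(conf, may_fire(conf, can_check_in=False))
```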

These pressing questions notwithstanding, would we still consider a system such as this “semi-autonomous”? In other words, the systems we have now are permitted to engage targets – that is, target and trigger – autonomously based on some preselected criteria. Would systems that utilize a “training data set” to learn from likewise be considered “semi-autonomous” because a human preselected the training data? Common sense would say “no,” but so far militaries may say “yes.” The US Department of Defense, for example, states that a “semi-autonomous” weapon system is one that “once activated, is intended only to engage individual targets or specific target groups that have been selected by a human operator” (DoD, 2012). Yet at what point would we say that “targets” are not selected by a human operator? Who is the operator? The software programmer with the training data set can be an “operator”; the lowly Airman can likewise be an “operator” if she is the one ordered to push a button; and so too can the Commander who orders her to push it (though the current DoD Directive makes a distinction between “commander” and “operator,” which problematizes the notion of command responsibility even further). The only policy we have on autonomy does not define, much to my dismay, “operator.” This leaves us in the uncomfortable position that the distinction between autonomous and semi-autonomous weapons is one without difference, and taken to the extreme it would mean that militaries need only claim that their weapons system is “semi-autonomous,” much to the chagrin of common sense.

Meaningful or Meaningless Control

In May of 2014, the United Nations Convention on Certain Conventional Weapons (CCW) first considered the issue of banning lethal autonomous weapons. Before the start of the informal expert meetings, Article 36 circulated a memorandum on the concept of “meaningful human control.” The document attempted to frame the discussion around the varying degrees of control over increasingly automated (and potentially autonomous) weapons systems in contemporary combat. In particular, Article 36 posed the question as one about what the appropriate balance of control ought to be over a weapons system that can operate independently of an operator in a defined geographical area for a particular period of time. Article 36 does not define “meaningful control,” but rather seeks to generate discussion about how much control ought to be present, what “meaningful” entails, and how computer programming can enable or inhibit human control. The states parties at the CCW agreed that this terminology was crucial and that no weapons system lacking meaningful human control ought to be deployed. The Duck’s Charli Carpenter has written about this as well, here.

Last month, in October, the United Nations Institute for Disarmament Research (UNIDIR) held a conference on the concept of meaningful human control. Earlier this month, states again convened in Geneva at another CCW meeting and agreed to further consider the matter in April of 2015. Moreover, other civil society groups are also now beginning to think about what this approach entails. It appears, then, that this concept has become a rallying point in the debate over autonomous weapons. Yet while we have a common term around which to agree, we are not clear on what exactly “control” requires, what proxies we could utilize to make control more efficacious, such as geographic or time limits, or what “meaningful” would look like.

Today, I had an engaging discussion with colleagues about a “semi-autonomous” weapon: Lockheed Martin’s Long Range Anti-Ship Missile (LRASM). One colleague claimed that this missile is in fact an autonomous weapon, as it selects and engages a target. Another colleague, however, claimed that this was not an autonomous weapon because a human being preselects the targets before launching the weapon. Both my colleagues are correct. Yet how can this be so?

The weapon does select and engage a target after it is launched, and the particular nature of the LRASM is that it can navigate in denied environments where other weapons cannot. It can change course when necessary, and when it finds its way to its preselected targets, it selects among these targets based upon an undisclosed identification mechanism (probably image recognition similar to that of other precision guided munitions). LRASM is unique in its navigation and target cuing capabilities, as well as its ability to coordinate with other launched LRASMs. The question about whether it is an autonomous weapon, then, is really a question about meaningful human control.

Is it a question about “control” once the missile reaches its target destination and then “decides” which ship amongst the convoy it will attack? Or is it a question about the selection of the grid or space that the enemy convoy occupies? At what point is the decision about “control” to be made?

I cannot fully answer this question here. However, I can raise two potential avenues for the way forward. One is to consider human control not in terms of a dichotomy (there is either a human being deliberating at every juncture and pulling a trigger or there is not), but in terms of an escalatory ladder. That is, we start with the targeting process, from the commander all the way down to a targeteer or weaponeer, and examine how decisions to use lethal force are made and on what basis. This would at least allow us to understand the different domains (air, land, sea) that we are working within, the types of targets likely found, and the desired goals to be achieved. It would also allow examination of when particular weapons systems enter the discussion. For if we have an understanding of what types of decisions, from various (perhaps automated) types of information, are made along this ladder, then we can determine whether some weapons are appropriate or not. We might even glean what types of weapons are always out of bounds.

Second, if this control ladder is too onerous a task, or perhaps too formulaic and would induce a perverse incentive to create weapons right up to a particular line of automation, then perhaps the best way to think about what “meaningful human control” entails is not to think about its presence, but rather its absence. In other words, what would “meaningless” human control look like? Perhaps it is better to define the concept negatively, by what it is not, rather than by what it is. We have examples of this already, particularly with the US’s policy regarding covert action. The 1991 Intelligence Authorization Act defines covert action very vaguely, and then in more concrete terms defines what it is not (e.g. intelligence gathering, traditional or routine military or diplomatic operations, etc.). Thus clear cases of “meaningless” control would be launching a weapon system without undertaking any consideration of the targets, the likely consequences, or the presence of civilian objects or persons, or launching a weapon that patrols perpetually. This is of course cold comfort to those who want to ban autonomous weapons outright. Banning weapons would require a positive, not a negative, definition.

States would have to settle the question of whether any targets on a grid are fair game, or whether only pre-identified targets on a grid, and not targets of opportunity, are fair game. It may also require states to become transparent about how such targets on a grid are confirmed, or how large a grid one is allowed to use. For if a search area ends up looking like the entire Pacific Ocean, that pesky question about “meaningful” rears its head again.
