Tag: killer robots

Ethical Robots on the Battlefield?

Every day it seems we hear more about the advancements of artificial intelligence (AI), the amazing progress in robotics, and the need for greater technological improvements in defense to “offset” potential adversaries. When these three threads get put together, there appears to be some sort of magic alchemy that results in wildly fallacious, and I would say pernicious, claims about the future of war. Much of this has to do, ultimately, with a misunderstanding of the limitations of technology as well as an underestimation of human capacities. The prompt for this round of techno-optimism debunking is yet another specious claim about how robotic soldiers will be “more ethical” and thus “not commit rape […] on the battlefield.”

There are actually three lines of thought here that need unpacking. The first involves the capabilities of AI in relation to “judgment.” As our above philosopher contends, “I don’t think it would take that much for robot soldiers to be more ethical. They can make judgements more quickly, they’re not fearful like human beings and fear often leads people making less than optional decisions, morally speaking [sic].” This sentiment about speed and human emotion (or the lack thereof) has underpinned much of the debate about autonomous weapons for the last decade (if not more). Dr. Hemmingsen’s views are not original. However, such views are not grounded in reality.


Improvised Explosive Robots

A common argument made in favor of using robotics to deliver (lethal) force is that the violence is mediated in such a way that it naturally de-escalates a situation. In some versions, this is because the “robot doesn’t feel emotions,” and so is not subject to fear or anger. In other strands, the argument is that, thanks to distance in time and space, human operators are able to take in more information and make better judgments, including the decision to use less-than-lethal or nonlethal force. These debates have, up until now, mostly occurred with regard to armed conflict. However, with the Dallas police chief’s decision to use a bomb disposal robot to deliver lethal force against the Dallas gunman, the discussion has entered a new domain: domestic policing.

Now, I am not privy to all of the details of the Dallas police operation, nor am I going to argue that the decision to use lethal force against Micah Johnson was not justified. The ethics of self- and other-defense would hold that Mr. Johnson’s actions, and his continued posing of a lethal and imminent threat, meant that officers were justified in using lethal force to protect themselves and the wider community. Moreover, state and federal law allows officers to use a “reasonable” amount of force, not merely the minimal amount of force, to carry out their duties. Thus I am not going to argue the ethics or the legality of using a robot to deliver a lethal blast to an imminent threat.

What is of concern, however, is how the arguments made in favor of the increased use of robotics in policing (or war) fail to take psychological and empirical facts into consideration. If we take these facts into account, we might find that the trend runs in the other direction: the availability and use of robotics may actually escalate the level of force used by officers.


Kill Webs: The Wicked Problem of Future Warfighting

The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle more than two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or ships’ pings.

Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept. Unfortunately, however, there is scant discussion of what the Kill Web actually is or entails. We have glimpses of the technologies that will constitute it, such as integrated sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.


The New Mineshaft Gap: Killer Robots and the UN

This past week I was invited to speak as an expert at the United Nations Informal Meeting of Experts held under the auspices of the Convention on Certain Conventional Weapons (CCW). The CCW’s purpose is to limit or prohibit certain conventional weapons that are excessively injurious or have indiscriminate effects. The Convention has five additional protocols banning or restricting particular weapons, such as blinding lasers and incendiary weapons. Last week’s meeting focused on whether the member states ought to consider a possible sixth protocol on lethal autonomous weapons, or “killer robots.”

My role in the meeting was to discuss the military rationale for the development and deployment of autonomous weapons. My remarks here reflect what I said to the state delegates and are my own opinions on the matter. They reflect what I take to be the central tenet of the debate about killer robots: whether states are engaging in an old debate about relative gains in power and capabilities, and about arms races. The 1964 political satire Dr. Strangelove finds comedy in the fact that, even in the face of certain nuclear annihilation, US strategic leaders were still concerned with the relative disparity of power between the US and the Soviet Union: the mineshaft gap. The US could not allow the Soviets to gain any advantage in “mineshaft space” – those deep underground spaces where the world’s inhabitants would be forced to relocate to keep the human race alive – because the Soviets would surely continue an expansionist policy and take out the US’s capability once humanity could safely emerge from nuclear contamination.


Meaningful or Meaningless Control

In May of 2014, the United Nations Convention on Certain Conventional Weapons (CCW) first considered the issue of banning lethal autonomous weapons. Before the start of the informal expert meetings, Article 36 circulated a memorandum on the concept of “meaningful human control.” The document attempted to frame the discussion around the varying degrees of control over increasingly automated (and potentially autonomous) weapons systems in contemporary combat. In particular, Article 36 posed the question as one about what the appropriate balance of control ought to be over a weapons system that can operate independently of an operator in a defined geographical area for a particular period of time. Article 36 does not define “meaningful control,” but rather seeks to generate discussion about how much control ought to be present, what “meaningful” entails, and how computer programming can enable or inhibit human control. The states parties at the CCW agreed that this terminology was crucial and that no weapons system lacking meaningful human control ought to be deployed. The Duck’s Charli Carpenter has written about this as well, here.

Last month, in October, the United Nations Institute for Disarmament Research (UNIDIR) held a conference on the concept of meaningful human control. Earlier this month, states again convened in Geneva at another CCW meeting and agreed to consider the matter further in April of 2015. Moreover, other civil society groups are also now beginning to think about what this approach entails. It appears, then, that the concept has become a rallying point in the debate over autonomous weapons. Yet while we have a common term on which to agree, we are not clear on what exactly “control” requires, what proxies we could use to make control more efficacious (such as geographic or time limits), or what “meaningful” would look like.

Today I had an engaging discussion with colleagues about a “semi-autonomous” weapon: Lockheed Martin’s Long-Range Anti-Ship Missile (LRASM). One colleague claimed that this missile is in fact an autonomous weapon, as it selects and engages a target. Another colleague, however, claimed that it is not an autonomous weapon because a human being preselects the targets before the weapon is launched. Both of my colleagues are correct. Yet how can this be so?

The weapon does select and engage a target after it is launched, and the particular nature of the LRASM is that it can navigate in denied environments where other weapons cannot. It can change course when necessary, and when it finds its way to its preselected targets, it selects among them based upon an undisclosed identification mechanism (probably similar to the image recognition used in other precision-guided munitions). LRASM is unique in its navigation and target-cuing capabilities, as well as in its ability to coordinate with other launched LRASMs. The question of whether it is an autonomous weapon, then, is really a question about meaningful human control.

Is it a question about “control” once the missile reaches its target destination and then “decides” which ship amongst the convoy it will attack? Or is it a question about the selection of the grid or space that the enemy convoy occupies? At what point is the decision about “control” to be made?

I cannot fully answer this question here. However, I can raise two potential avenues for a way forward. One is to consider human control not in terms of a dichotomy (there is either a human being deliberating at every juncture and pulling a trigger or there is not), but in terms of an escalatory ladder. That is, we start with the targeting process, from the commander all the way down to a targeteer or weaponeer, and examine how decisions to use lethal force are made and on what basis. This would at least allow us to understand the different domains (air, land, sea) we are working within, the types of targets likely to be found there, and the goals to be achieved. It would also allow us to examine when particular weapons systems enter the discussion. For if we understand what types of decisions are made along this ladder, and from what (perhaps automated) types of information, then we can determine whether particular weapons are appropriate or not. We might even glean which types of weapons are always out of bounds.

Second, if this control ladder is too onerous a task, or perhaps too formulaic and likely to create a perverse incentive to build weapons right up to a particular line of automation, then perhaps the best way to think about what “meaningful human control” entails is to think not about its presence but about its absence. In other words, what would “meaningless” human control look like? Perhaps it is better to define the concept negatively, by what it is not, rather than by what it is. We have examples of this already, particularly in the US’s policy regarding covert action. The 1991 Intelligence Authorization Act defines covert action very vaguely, and then in more concrete terms defines what it is not (e.g., intelligence gathering, traditional or routine military or diplomatic operations, etc.). Clear cases of “meaningless” control, then, would be launching a weapons system without any consideration of the targets, the likely consequences, or the presence of civilian objects or persons, or launching a weapon that patrols perpetually. This is of course cold comfort to those who want to ban autonomous weapons outright. A ban would require a positive, not a negative, definition.

States would have to settle the question of whether any targets within a grid are fair game, or whether only pre-identified targets (and not targets of opportunity) are. It may also require states to become transparent about how targets on a grid are confirmed, or about how large a grid one is allowed to use. For if a search area ends up looking like the entire Pacific Ocean, that pesky question about “meaningful” rears its head again.

“The UN v. Skynet?” An ISA Teaser

As the gods of the International Studies Association have seen fit to place my panel at 8:15 on a Saturday morning, I decided to advertise my talk in the blogosphere in hopes of drumming up some attendees. Below please see the teaser trailer for my working paper this year, which explores the impact of science fiction on global policy making in the area of autonomous weapons.

The paper itself is not yet ready for distribution (research is still in progress), but I should be able to circulate it later this year, and feedback at the panel will help me refine my conceptual framework – so if you are interested in these matters, please come join us in the Richmond Room at the Toronto Hilton this Saturday! The panel, organized by UBC’s Chris Tenove, is entitled “Representation Across Borders”: Richard Price is chairing, and the other speakers include Wendy Wong, Sirin Duygulu, and Hans-Peter Schmitz. The panel abstract is below the fold.


Resistance is Not Futile.

A claim common among opponents of a treaty ban on autonomous weapon systems (AWS) is that treaties banning weapons don’t work – the suggestion being that efforts to arrest the development of AWS are an exercise in futility. Now this claim has been picked up uncritically by the editors at Bloomberg, writing in the derisively titled “No Really, How Do We Keep Robots From Destroying Humans?”:

“Bans on specific weapons systems — such as military airplanes or submarines — have almost never been effective in the past. Instead, legal prohibitions and ethical norms have arisen that effectively limit their use. So a more promising approach might be to adapt existing international law to govern autonomous technology — for instance, by requiring that such weapons, like all others, can’t be used indiscriminately or cause unnecessary suffering.”

The editors point out a valid distinction between weapons that are banned outright and more generic questions of how the use of a specific weapon may or may not be lawful (the principles of proportionality and distinction apply to the use of all weapons). But they also make a conceptual and a causal error, and in so doing woefully underestimate the political power of comprehensive treaty bans.

The State of the Killer Robot Debate

In a new piece up at Foreign Affairs on the killer robot debate, I attempt to distinguish between what we know and what we can only speculate about regarding the ethics and legality of autonomous weapons. The gist:

Both camps have more speculation than facts on their side… [But] the bigger problem isn’t that some claims in this debate are open to question on empirical grounds, rather that so many of them simply cannot be evaluated empirically, since there is no data or precedent with which to weigh discrimination and proportionality against military necessity.

So, instead of resting on discrimination and proportionality principles as with earlier weapons ban campaigns, the lethal machines debate is converging around two very different questions. First, in situations of uncertainty, does the burden of proof rest on governments, to show that emerging technologies meet humanitarian standards, or on global civil society, to show that they don’t? And second, even if autonomous systems could one day be shown to be useful and lawful in utilitarian terms, is a deeper underlying moral principle at stake in outsourcing matters of life or death to machines?

The disarmament camp argues yes to both; techno-optimists argue no. To some extent these are questions of values, but each can also be empirically evaluated by the social realities of international normative precedent. In each case, those cautioning against the untrammeled development of unmanned military technology are on firmer ground.

Read the whole thing here.

A Network Explanation for the Rise of Global Social Issues

I am delighted to report that as of last Friday at 7:02pm I have completed final revisions on my latest book manuscript. This culminates a project on issue neglect that started with my observations about children born of war, emerged as a theory of “agenda-vetting,” and involved a detailed NSF-funded study of the rise and fall of issues in the human security network. It also includes detailed case studies on several norm-building campaigns I’ve been following since 2007: the campaign to make amends to civilians harmed in legitimate battle operations, the campaign to ban infant male circumcision, and the campaign to ban the development and use of autonomous weapons.

I am told by my editor at Cornell University Press that it should, hopefully, be on the shelves in time for next year’s ISA conference. For readers who have long followed my work on this project, which coincided with the start of my blogging career, I offer below the fold the first few paragraphs of the book as a sneak preview.

War Law, the “Public Conscience” and Autonomous Weapons

In the Guardian this morning, Christof Heyns very neatly articulates some of the legal problems with allowing machines the ability to target human beings autonomously – whether they can distinguish civilians from combatants, make qualitative judgments, or be held responsible for war crimes. But after going through this back and forth, Heyns appears to reframe the debate entirely, away from the law and into the realm of morality:

The overriding question of principle, however, is whether machines should be permitted to decide whether human beings live or die.

But this “question of principle” is actually a legal argument itself, as Human Rights Watch pointed out last November in its report Losing Humanity (p. 34): the entire idea of outsourcing killing decisions to machines is morally offensive, frightening, even repulsive, to many people, regardless of utilitarian arguments to the contrary.

How Do Americans Feel About Fully Autonomous Weapons?

According to a new survey I’ve just completed, not great. As part of my ongoing research into human security norms, I embedded questions in YouGov’s Omnibus survey asking how people feel about the potential outsourcing of lethal targeting decisions to machines. 1,000 Americans were surveyed, matched on gender, age, race, income, region, education, party identification, voter registration, ideology, political interest, and military status. Across the board, 55% of Americans opposed autonomous weapons (nearly 40% were “strongly opposed”), and in a second question a majority (53%) expressed support for the new ban campaign.
