Today, Human Rights Watch (HRW) released a new report, Losing Humanity: The Case Against Killer Robots, becoming the most influential NGO to date to join an emerging call for a preemptive norm against the deployment of autonomous weapons.
The International Committee for Robot Arms Control, an association of scientists and philosophers, proposed such a norm back in 2010 at a meeting in Berlin; the outgoing President of the International Committee of the Red Cross (ICRC), Jakob Kellenberger, issued a statement in September 2011 suggesting the organization remained seized of developments in the area of autonomous weapons; Article36.org was the first NGO to suggest an outright ban in April of this year. Indeed, some members of the scholarly community have been calling for a ban since at least 2004.
But the adoption of the issue by Human Rights Watch signals a watershed moment in an emerging global concern with the outsourcing of targeting decisions to machines, for three reasons.
First, because of its credibility and centrality within global civil society, Human Rights Watch will bring unprecedented visibility and legitimacy to what was previously a somewhat fringe issue. As I previously noted in an article still theoretically relevant though now substantively out of date (which used the absence of a campaign against autonomous weapons as a case study), the presence or absence of HRW and the ICRC in weapons campaigns is the single most important variable explaining why some campaigns become globally prominent and others remain marginalized.
Second, although HRW was not the first to call for a killer robot ban, they are the first NGO to publish, in partnership with Harvard’s International Human Rights Clinic, a comprehensive report on the topic. This is important because how an organization “adopts” an issue matters as much as whether it does. A full report is more meaningful than a press release, a meeting, or an agenda item on a website. It signals a commitment of thought and resources. It conveys heft. It suggests that Human Rights Watch is positioning itself to lead a broad-based humanitarian disarmament coalition around this issue, one likely to include many of its former partners from the landmine and cluster munitions campaigns.
Human Rights Watch’s report has also significantly honed the “killer robot” frame. Previously, concern with fully autonomous weapons was conflated with concerns over drones; and arguments against killer robots ran the gamut from “robots can’t discriminate” to “robots make wars more likely” to whether there should be a “human right to be killed only by other humans.” Losing Humanity distinguishes clearly between weapons with a human “in-the-loop,” “on-the-loop,” and “out-of-the-loop,” focusing primarily on fully autonomous weapons. And it focuses the lens on the one dimension of this issue likeliest to speak to the humanitarian law community and to militaries: protection of civilians.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose, Arms Division director at Human Rights Watch. “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”
For all these reasons, we can now expect the killer robots issue to move from the sidelines to the center of the humanitarian disarmament agenda for the foreseeable future. I will have more to say about the arguments in the report after I’ve read it more closely. And since autonomous weapons have been one of several cases I’ve followed since 2007 for my new book on human security campaigns, I will have more to say about this issue’s origins in days to come – stay tuned.
It’s clear that “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.” Unfortunately, it is less clear from this argument at what level that control needs to be exerted, or that “robotic warfare” should be banned and not just regulated.
IHL does not require that civilian deaths and injuries be minimized – taken literally, that would imply that no military action risking a single civilian should ever be taken. Rather, it requires that “expected… incidental” civilian harms not “be excessive in relation to the… military advantage anticipated”. If the “military advantage” anticipated from the use of autonomous weapons systems (AWS) were in fact substantial, it would be hard to justify an outright ban, provided it could be argued that – under restrictions dependent on the tactical situation and available technology, and with an appropriate level of human supervision and control – any increase in the risk or expectation of civilian harm would be small or nonexistent.
Plausible arguments can be made, under some assumptions (either that the AI capabilities of the AWS would be good, or that the circumstances of their use would be restricted), that civilian harms could be expected to be even less using AWS than with alternatives.
The HRW report is filled with unsupported assertions about what machines can’t do: “…distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human’s actions, something a robot could not do.” Really? What level of understanding of intentions is necessary here? What level of understanding do soldiers typically have? How do we know a robot could not understand intentions? “Even if the development of fully autonomous weapons with humanlike cognition became feasible, they would lack certain human qualities, such as emotion, compassion, and the ability to understand humans.” Since there is research which aims to develop “emotional robots,” how do we know this is true? Maybe the answer is that compassion is not the first thing we would design into a “weapon.” But something is missing from the argument that even if a robot has “humanlike cognition” it would be unable to “understand humans” well enough to satisfy the law of war.
I am afraid that focusing the lens on claims that AWS are inevitably a greater threat to civilians than other weapons means getting mired in debates about AI limits and military doctrines, and not seeing the big picture. As I argue here, this is a weak foundation for the movement to ban “killer robots” and not what is really behind all the concern. This approach will lead to Codes of Ethics (called for by HRW) and a “slow, cautious” approach to AWS deployment and use (at first…), not to a global convention… which, incidentally, should address restrictions on drones as well.
The reason for opposing robotic warfare is that it poses a threat to humanity. There are many aspects to this threat besides the possibility of their indiscriminate killing of civilians.
Robots do threaten to make war more likely, either because the political and monetary costs and risks are perceived as lower, or because automated systems too complex for human comprehension manage to initiate war on their own.
Humans do have a right not to be killed on the decision of machines, and this does not deserve to be parodied as a “human right to be killed only by other humans” (as if cancer and heart disease didn’t exist). This is actually one of the most powerful arguments against killer robots, and even if it doesn’t interest lawyers and militaries, I think it is one of those “self-evident truths” that most people will agree with.
The HRW report points to “the Martens Clause, which prohibits weapons that run counter to the ‘dictates of public conscience.’” It may be that this is the hook in IHL on which to hang most of the weight, for now. But I think that the case against putting the guns in the hands of machines is better expressed in terms of new principles, because this is a new threat.
Hi Mark,
Thanks for your thoughtful comment. There’s a lot of substance here, some of which I will want to take up in future posts on this broad topic, which you can look for in coming weeks as I continue to engage with this. We can also email directly if you’d like to swap ideas offline.
For now, on the record, a couple of thoughts from a political scientist’s perspective. First, I did not mean to sound mocking at all; I simply had not thought through my imprecise articulation of the “human right” in question. I’m going to leave it as is in this post, but I agree that your formulation is clearer and will use it in future discussion of that topic. Also, you’re right: I think that, in terms of the Martens Clause – which will itself be the basis for a future post – this is probably the element most obviously connected to general public opprobrium toward killer robots.
Second, it’s true there are a variety of reasons for opposing autonomous weapons, and a variety of arguments made to that effect (I personally find some more compelling than others). However, a lot of research, including my own, has shown that the likelihood of an issue becoming salient on global political agendas often has to do with how it is framed. Seasoned human rights campaigners often work with the slice of the argument that is likeliest to be effective with a particular target audience – either the governments whose policies they want to change, or the organizations they want in their coalition – leaving aside the rest, for purely strategic reasons. And organizations are also limited in how they can frame an issue by their own organizational culture. HRW’s mandate, for example, requires them to refer to human rights and humanitarian law, and their organizational culture orients them toward the protection of civilians (it’s one reason, for example, that their incendiary weapons work focuses primarily on civilians rather than combatants). But suffice to say a lot of organizations are going to join this campaign and each of them will probably bring a slightly different perspective to this topic based on their own organizational ideals. There is certainly more than one way to slice this cake.
Finally, since framing processes are often about tactics – norm-building is a political process, after all, and the perfect can be the enemy of the good – it’s not unheard of for the frame chosen by a particular actor at a particular point in time to be subject to critique. I’m not sure if that’s the case here, and I think you raise some really interesting points. I guess we’ll see whether and how the discussions with stakeholders bring out the kinds of rebuttals you flag, and how the humanitarian disarmament community responds to them. It’s often the case that campaign frames shift a bit as a campaign runs its course.
I will write more later about why I think the protection of civilians argument works politically for now, regardless of whether you or I or anyone else thinks it’s normatively the best or only frame – and as I’m reading the report, one of the things I’m noticing is how creatively HRW/IHRC have built some of the wider arguments you mention INTO that frame. More thoughts soon.
Thanks for the heads up on this, Charli. One minor correction: It’s Jakob, not Jason, Kellenberger.
Thanks for flagging that silly typo. Perils of blogging at midnight. Fixed.
It should be a human right not to be killed by anyone your surviving relatives cannot sue after the killing.
very clever