In a new piece up at Foreign Affairs on the killer robot debate, I attempt to distinguish between what we know and what we can only speculate about regarding the ethics and legality of autonomous weapons. The gist:
Both camps have more speculation than facts on their side… [But] the bigger problem isn’t that some claims in this debate are open to question on empirical grounds, but rather that so many of them simply cannot be evaluated empirically, since there is no data or precedent with which to weigh discrimination and proportionality against military necessity.
So, instead of resting on discrimination and proportionality principles as with earlier weapons ban campaigns, the lethal machines debate is converging around two very different questions. First, in situations of uncertainty, does the burden of proof rest on governments, to show that emerging technologies meet humanitarian standards, or on global civil society, to show that they don’t? And second, even if autonomous systems could one day be shown to be useful and lawful in utilitarian terms, is a deeper underlying moral principle at stake in outsourcing matters of life or death to machines?
The disarmament camp argues yes to both; techno-optimists argue no. To some extent these are questions of values, but each can also be evaluated empirically against the social realities of international normative precedent. In each case, those cautioning against the untrammeled development of unmanned military technology are on firmer ground.
Read the whole thing here.
Open Democracy has also published a more detailed version of my survey results on which this argument is based.
Prof. Carpenter,
Thanks for the article. Wanted to push back on a couple of points (and in doing so break my ‘dissertation triage, can’t write about Preds’ moratorium). Would like to note that you’re one of very few people who I think have helpful things to say about this topic (and two of those people are named Dan and are at Georgetown). Of course, these are my thoughts only, and don’t necessarily represent the DoD, USAF, RPA community, Pred/Reaper community, etc. That said,
1. I think we should take a more critical eye to the ‘distance = loss of honor’ trope. I think that, at least in part, it reflects a sentimentality rooted more in a desire for vicarious experiences, fueled by a toxic combination of violent entertainment and relatively few veterans per capita (who would bring a more sober view of what war is), than in the Just War tradition. Melville said these sorts of things about the USS Monitor in the Civil War. The more I reflect on this trend, the more I see it as part of the ‘war as violent entertainment’ trend so rightly excoriated by Lt Col Grossman (On Combat, On Killing). The heart of the Just War tradition is about achieving a better peace with a minimum of suffering; building a crucible of violent self-actualization to test people’s mettle comes from a very different tradition entirely. I greatly prefer the former tradition to the latter, and within that tradition these platforms fare better than in the ‘war as crucible’ tradition.
2. In general, I find the lack of control theory in this discussion disturbing. After all, the whole discussion turns on a point about control systems, and the academic fields that have rigorous things to say about them (e.g., electrical engineering) are generally missing. I’m a big fan of David Mindell’s (MIT ESD) work on control systems; he looks at both analog and digital logics as control systems. Even a Welsh longbow has an analog logic to its lethal flight, and there is some control system between the archer and the target. This is one reason I find this category so frustrating – we’re comfortable with missiles, which are very much lethal semi-autonomous UAVs, some smarter and more autonomous than the Pred/Reaper, yet we find them familiar and comfortable. Lethal machines are as old as ballistae, and we do the discussion a disservice by creating a category where there is a spectrum.
I believe a better way to see this is to realize that almost all modern weapons have some control-system ‘air gap’ between the last red button and their effects. This doesn’t excuse away the discussion, but it makes it far more rigorous – in fact, ‘killer robots’ is more likely to excuse away this more difficult discussion by holding up Preds as sin-eaters for the automation ubiquitous in almost all lethal systems. At the other end of the spectrum, full autonomy isn’t really that meaningful. As Prof. Nexon points out, if weapons were really all the way autonomous, why would they go and fight our wars? There is always both an element of human control and an element of automation in future weapons – there is a very important discussion to have on the nature and distance of the ‘air gap,’ but it needs to be informed by stronger theoretical foundations in cybernetics, etc.
2a. Precision in terminology. You mention ‘drones’ (to my mind, a word that carries a great deal of emotion in an incoherent definition, but that’s another matter), but the Pred is far less automated than many manned aircraft. You accurately represent the question as having to do with the AEGIS/Vincennes system far more than with the ‘fly by wireless,’ directly remote-controlled Pred/Reapers, but since the recurring Pred picture is typically the one that provides the background for book covers, hook lines, etc., this clouds the discussion greatly. For the X-47, absolutely, we need to think about these things. By the same token, for the F-22 and the F-35, we need to think about the degree of automation in these systems. But the issues of significance for the Pred/Reaper have to do with sovereignty, kinetic strikes, etc., rather than automation. It would be helpful to be clearer that the ‘killer robot’ debate is orthogonal to the ‘persistent remote air campaign’ debate.
3. If the phrase ‘reverse securitization’ is still unclaimed, I’d like to apply it to most of what’s happening in this debate. In the sense that securitization makes a topic high voltage, the word ‘drone’ has created a nascent taboo, but it deeply interferes with the ability to speak meaningfully about these sorts of things. As with securitization, this trope has allowed shoddy research to pass with less scrutiny than it should have (thinking of the ‘Living Under Drones’ report; NFI on this, but glad to talk offline as able). Far more importantly, I think this can yield some counter-productive results. While the nuclear taboo analogy has been raised, nuclear tech is discrete and distinctive, whereas these techs are combinations of openly available developments in communications and computing technology. A ‘drones taboo’ would be difficult to enforce, and might even encourage states to pursue a tech which has far more technical limitations than the ‘scary killer robots’ meme would lead one to believe. The concerning part is that such a taboo would likely delay development of autonomous systems for benign, non-violent actors (Snowgoose for disaster response, etc.) while failing to restrain states.
Not to be too cynical about this (and this probably has to do with my frustration with NGOs while researching anti-human-trafficking cooperation), but there’s a potential ‘NGO Scramble’ logic in all of this. In the course of this discussion, more than a few careers will be made in the legal and human rights fields, regardless of how it turns out on the policy side. Perhaps the whole discussion could be a bit more introspective, and acknowledge the possibility of perverse incentives in how we frame these things.
All of that said, enjoyed the article, and looking forward to the next one.
V/r
Dave
Dave,
Thanks for the very rich comment. Quick replies, in order:
1) I agree with you on the distance/loss of honor trope. I hear this a lot with respect to drones, and in my mind today’s stand-off weapons are not qualitatively different from the longbow in that regard. The “loss of honor” trope I was describing in the article is a little different: it has less to do with stand-off weaponry per se as a way to minimize losses while hitting enemy combatants, than with the idea that warriors should be protected from fire to the point that enemy civilians are put in harm’s way.
2) Your points on the air-gap are very well taken. I agree more thinking on this is needed. Human Rights Watch’s report did a good job, I think, of treating autonomy as a continuum, as did the DoD Directive, and my guess is so will whatever the ICRC writes up on this subject. One of the interesting points of debate, if we get into a treaty process, is going to be how to define points on that continuum, and how to define the point where we go too far. For example, it is very different to say a weapon is fully autonomous when a human being’s consent is no longer required to fire, versus when a human no longer has the right to veto a computer decision. But the question of how similar or different kinds of future autonomy are from what we already have in place today is very important.
3) and 4) Precision in terminology. YES. I am absolutely NOT talking about drones in my article and completely agree that people, including campaigners, are unhelpfully conflating the two. The question is not tele-operation but automation of targeting systems – taking humans out of the loop. And stand-off weapons aside, campaigners are saying that fully automated weapons, where the decision to target and fire is itself outsourced to a computer, are a qualitatively different thing. The reference to drones in the first paragraph was mainly designed to differentiate the two. The campaign (at least this campaign) is not looking for a “drones taboo” – drones have many non-lethal applications, as you point out. It is not even looking for a taboo on armed drones that keep a human in the loop (although there is a separate movement against those that comes from a very different legal/analytical logic). The Campaign to Stop Killer Robots just wants to keep lethal decision-making in human hands. So the taboo would be on fully autonomous lethal systems, not drones per se.