A common refrain from critics of the campaign to ban autonomous weapons is that these weapons are “inevitable.” If that’s true, then efforts to mitigate or pre-empt their use are not just a waste of time but a dangerous distraction from the real issue: staying ahead in an impending robot arms race or, at least, making sure that the weapons (which will definitely be built and deployed) have the best “humanitarian” programming possible. My colleague Michael Horowitz made the former argument recently in Foreign Policy. Kenneth Anderson and Matthew Waxman (Waxman is slated to speak tomorrow at the Experts’ Meeting on Autonomous Weapons in Geneva) are known for making the latter argument. According to these commentators, resistance is futile.
But as I’ve written before, resistance is actually not entirely futile. A fair amount of humanitarian disarmament history demonstrates that just because countries can develop and deploy a weapon doesn’t mean they necessarily will. Indeed, Mines Action Canada points out the same in a Memorandum to CCW Delegates circulating at the meeting. Therein, they remind diplomats that a barely-deployed weapon has been banned before: blinding lasers were prohibited in 1995 in response to humanitarian concerns about the superfluous injury caused by such weapons, before they were widely used, despite the fact that many nations had invested in blinding laser programs and despite the fact that the weapons had considerable military utility.
MAC’s memo focuses on blinding lasers because of the diplomatic analogies between today’s meeting and that earlier CCW process, but there are even earlier examples of emerging military technologies being banned early and quickly for humanitarian reasons. Exploding bullets, whose horrors were evident after the American Civil War, were the first weapon ever to be subject to a multilateral treaty ban, prohibited by the St. Petersburg Declaration of 1868. The expanding or “dum-dum” bullet, designed to flatten upon impact and thus create superfluous wounds, was developed in the late nineteenth century and was likewise banned before it was widely deployed: the 1899 Hague Declaration prohibited dum-dums outright before they saw wide use on the international battlefield (though they had been used in British colonial wars). According to Robin Coupland and Dominique Loye, the ban has been widely adhered to, though according to Wikipedia such bullets remain in use for hunting and (perhaps ironically) in police operations. At any rate, dum-dums too represent a case of an emerging military technology with clear utility that was abandoned through international declaration before it was widely used.
In short, no military technology is “inevitable.” And neither are killer robots. Whether we plunge forward simply because we can or establish a precautionary principle or red-line ban against their use will be entirely a matter of political will and imagination.
I knew one of the negotiators on the Blinding Laser Weapons Protocol. Have you looked at the language of that Protocol? “It is prohibited to employ laser weapons specifically designed, as their sole combat function or as one of their combat functions…” In other words, if it “accidentally” blinds someone, that’s actually not a violation. What state WOULDN’T sign up for that? After some Congressional pressure, the Pentagon basically shrugged its shoulders and was pretty happy with the eventual language. If anything, that Protocol is probably another indication of just how useless weapons treaties are, generally.
Also, the “undetectable fragments” Protocol is a good example of banning a weapon that did not exist. Of course NO ONE was capable of designing such a thing at the time, given the improvements in x-ray technology. However, certain countries spread a theory that the Americans were developing such a weapon. So the Americans agreed to the Protocol because, quite frankly, they weren’t. A bunch of humanitarian effort was essentially wasted on something that was never going to exist. Shouldn’t humanitarians concentrate on the “here and now” rather than on what is, for the most part, conjecture if not make-believe?
I don’t agree with all you’ve said, but if I take your terms of reference as a starting point, I’d say: sensor-fuzed weapons are here and now, so I think it is a very good time to have a conversation about what “meaningful human control” means, if we agree that the principle is worthwhile or at least that a precautionary principle is a good bet.
The Pentagon did more than shrug its collective shoulders and was more than just pretty happy with the text. They drafted it and worked hard to achieve consensus.
The tactical value-added of autonomous weapons systems is so vastly greater than that of dum-dums or even lasers that I’m not sure they are helpful analogies. Perhaps the best analogy I can think of is chemical weapons, but even then, I see a disparity.
In principle I agree with the statement that “no military technology is inevitable,” Charli, but like others I have strong doubts about some of these historical examples. Having done detailed research on the negotiations of the Additional Protocols in the 1970s and studied various states’ confidential positions and tactics, I tend to share the skepticism about the nature/wording of the weapons-related prohibitions and limitations that emerged from them. We need better theory-driven historical work on these processes. I’m working on that.