One of the more specious criticisms of the “stopkillerrobots” campaign is that it is using sensationalist language and imagery to whip up a climate of fear around autonomous weapons. So the argument goes, by referring to autonomous weapons as “killer robots” and treating them as a threat to “human” security, campaigners manipulate an unwitting public with robo-apocalyptic metaphors ill-suited to a rational debate about the pace and ethical limitations of emerging technologies.
For example, in the run-up to the campaign launch last spring, Gregory McNeal at Forbes opined:
HRW’s approach to this issue is premised on using scare tactics to simplify and amplify messages when the “legal, moral, and technological issues at stake are highly complex.” The killer robots meme is central to their campaign and their expert analysis.
McNeal is right that the issues are complex, and of course it's true that in press releases and sound bites campaigners articulate this complexity in ways designed to resonate outside of legal and military circles (as all good campaigns do), saving the more detailed and nuanced arguments for in-depth reporting. But McNeal's claim that this is a "scare tactic" only makes sense if people are in fact more likely to feel afraid of autonomous weapons when they are referred to as "killer robots."
Is that true?