Tag: autonomous weapons

Analogies in War: Marine Mammal Systems and Autonomous Weapons

Last week I hosted and facilitated a multi-stakeholder meeting of governments, industry and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment and use of autonomous weapons systems (AWS). These two concepts presently dominate discussion over whether to regulate or ban AWS, but neither is fully endorsed internationally, despite work from governments, academia and NGOs. On one side, many prefer the notion of “control”; on the other, “judgment.”

Yet what has become apparent from many of these discussions, my workshop included, is that there is a need for an appropriate analogy to help policy makers understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy to AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us understand the nuance of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.


Kill Webs: The Wicked Problem of Future Warfighting

The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or a ship’s pings.

Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept. Unfortunately, there is scant discussion about what the Kill Web actually is or entails. We have glimpses of the technologies that will comprise it, such as the integration of sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.


Distance and Death: Lethal Autonomous Weapons and Force Protection

In 1941 Heinrich Himmler, one of history’s most notorious war criminals and mass murderers, faced an unexpected problem: he could not keep using SS soldiers to murder the Jewish population, because those soldiers were breaking down psychologically. As August Becker, a member of the Nazi gas-van program, recalls,

“Himmler wanted to deploy people who had become available as a result of the suspension of the euthanasia programme, and who, like me, were specialists in extermination by gassing, for the large-scale gassing operations in the East which were just beginning. The reason for this was that the men in charge of the Einsatzgruppen [SS] in the East were increasingly complaining that the firing squads could not cope with the psychological and moral stress of the mass shootings indefinitely. I know that a number of members of these squads were themselves committed to mental asylums and for this reason a new and better method of killing had to be found.”


The Self-Fulfilling Prophecy of High Tech War

 

In the fall of 2014, then-Defense Secretary Chuck Hagel announced his plan to maintain US superiority against rising powers (i.e. Russia and China). His claim was that the US cannot lose its technological edge – and thus its superiority – against a modernizing Russia and a rapidly militarizing China. To ensure this edge, he called for the “Third Offset Strategy.”


Resistance is Not Futile.

A claim common among opponents of a treaty ban on autonomous weapon systems (AWS) is that treaties banning weapons don’t work – suggesting efforts to arrest the development of AWS are an exercise in futility. Now this claim has been picked up uncritically by the editors at Bloomberg, in a piece derisively titled “No Really, How Do We Keep Robots From Destroying Humans?”:

“Bans on specific weapons systems — such as military airplanes or submarines — have almost never been effective in the past. Instead, legal prohibitions and ethical norms have arisen that effectively limit their use. So a more promising approach might be to adapt existing international law to govern autonomous technology — for instance, by requiring that such weapons, like all others, can’t be used indiscriminately or cause unnecessary suffering.”

The editors point out a valid distinction between weapons that are banned outright and more general questions of how the use of a specific weapon may or may not be lawful (the principles of proportionality and distinction apply to the use of all weapons). But they also make a conceptual and a causal error, and in so doing woefully underestimate the political power of comprehensive treaty bans.

Beyond Robopocalypticism

In all the media frenzy over “killer robots,” Terminator imagery comes up a lot. So do references to Battlestar Galactica. So does a specific scene from Robocop, soon to be remade to resonate with public fears of domestic drones.

These iconic narratives invoke a recurrent theme in American science fiction about lethal robot malfunctions or uprisings against their human creators. So prevalent is this theme in anti-killer-robot media coverage that some have argued that concern over autonomous weapons is a product of science fiction itself: Hollywood is apparently to blame for priming the public with an unfounded fear of killer machines. For example, Joshua Foust writes:

“Why is there such concern? Part of the reason, arguably, is cultural. American science fiction, in particular, has made clear that autonomous robot are deadly. From the Terminator franchise, the original and the remake of Battlestar Galactica, to the Matrix trilogy, the clear thrust of popular science fiction is that making machines functional without human input will be the downfall of humanity. It is under this sci-fi ‘understanding’ of technology that some object to autonomous weaponry.”

There are several reasons why this sort of argument doesn’t make sense, but one of the most important is that it overstates the case about robopocalypticism in American “killer robot” science fiction. In fact, co-existing with the imagery of killer robots run amok is a broad range of far more benign killer robot imagery that no one seems to mind or even think about when worrying over autonomous weapons. Here are five great examples of killer robots that filmmakers and TV producers definitely want you to want on your side in a pinch. [BSG SPOILER ALERT]

Friday Nerd Blegging: Science Fiction and Foreign Policy Discourse


[Video: Lawfare T2000 from Adama on Vimeo]

The video you see is not just an intriguing and entertaining way to express one position in legal arguments around the debate over autonomous weapons. It represents a fascinating foreign policy artifact, a data point in the policy discourse over the value of a pre-emptive ban on autonomous weapons, one in which science fiction metaphors are given a prime place. This raises intriguing questions about the relationship between science fiction and foreign policy and how we might study it.

The State of the Killer Robot Debate

In a new piece up at Foreign Affairs on the killer robot debate, I attempt to distinguish between what we know and what we can only speculate about regarding the ethics and legality of autonomous weapons. The gist:

Both camps have more speculation than facts on their side… [But] the bigger problem isn’t that some claims in this debate are open to question on empirical grounds, rather that so many of them simply cannot be evaluated empirically, since there is no data or precedent with which to weigh discrimination and proportionality against military necessity.

So, instead of resting on discrimination and proportionality principles as with earlier weapons ban campaigns, the lethal machines debate is converging around two very different questions. First, in situations of uncertainty, does the burden of proof rest on governments, to show that emerging technologies meet humanitarian standards, or on global civil society, to show that they don’t? And second, even if autonomous systems could one day be shown to be useful and lawful in utilitarian terms, is a deeper underlying moral principle at stake in outsourcing matters of life or death to machines?

The disarmament camp argues yes to both; techno-optimists argue no. To some extent these are questions of values, but each can also be empirically evaluated by the social realities of international normative precedent. In each case, those cautioning against the untrammeled development of unmanned military technology are on firmer ground.

Read the whole thing here.

From Kingdonian to Weberian Activism: A Shifting Stance

One of the great things about being done with my book project is that I can begin blogging and writing a little more openly regarding the issues I’ve been tracking empirically over the past seven years, in the same way I have often weighed in publicly on political subjects I’m not studying directly. The latter type of activity is referred to by Patrick Jackson and Stuart Kaufman as “Weberian activism”: informing policy debates by educating stakeholders and the public about the relevant empirical relationships underlying pressing policy decisions and global processes. In my view, this is the gold standard to which academic bloggers and commentators should aspire.

The “Fear” Factor in “Killer Robot” Campaigning

One of the more specious criticisms of the “stopkillerrobots” campaign is that it is using sensationalist language and imagery to whip up a climate of fear around autonomous weapons. So the argument goes, by referring to autonomous weapons as “killer robots” and treating them as a threat to “human” security, campaigners manipulate an unwitting public with robo-apocalyptic metaphors ill-suited to a rational debate about the pace and ethical limitations of emerging technologies.

For example, in the run-up to the campaign launch last spring, Gregory McNeal at Forbes opined:

HRW’s approach to this issue is premised on using scare tactics to simplify and amplify messages when the “legal, moral, and technological issues at stake are highly complex.” The killer robots meme is central to their campaign and their expert analysis.

McNeal is right that the issues are complex, and of course it’s true that in press releases and sound bites campaigners articulate this complexity in ways designed to resonate outside of legal and military circles (as all good campaigns do), saving more detailed and nuanced arguments for in-depth reporting. But McNeal’s argument about this being a “scare tactic” only makes sense if people are likelier to feel afraid of autonomous weapons when they are referred to as “killer robots.”

Is that true?

War Law, the “Public Conscience” and Autonomous Weapons

In the Guardian this morning, Christof Heyns very neatly articulates some of the legal problems with allowing machines the ability to target human beings autonomously – whether they can distinguish civilians from combatants, make qualitative judgments, or be held responsible for war crimes. But after going through this back and forth, Heyns then appears to reframe the debate entirely away from the law and into the realm of morality:

The overriding question of principle, however, is whether machines should be permitted to decide whether human beings live or die.

But this “question of principle” is actually a legal argument itself, as Human Rights Watch pointed out last November in its report Losing Humanity (p. 34): the entire idea of out-sourcing killing decisions to machines is morally offensive, frightening, even repulsive, to many people, regardless of utilitarian arguments to the contrary.

How Do Americans Feel About Fully Autonomous Weapons?

According to a new survey I’ve just completed, not great. As part of my ongoing research into human security norms, I embedded questions on YouGov’s Omnibus survey asking how people feel about the potential for outsourcing lethal targeting decisions to machines. 1,000 Americans were surveyed, matched on gender, age, race, income, region, education, party identification, voter registration, ideology, political interest and military status. Across the board, 55% of Americans opposed autonomous weapons (nearly 40% were “strongly opposed”), and a majority (53%) expressed support for the new ban campaign in a second question.

Cyber Nerd Blogging: Neuroscience, Conflict and Security

Antoine Bousquet has a fascinating post at Disorder of Things on developments in neuroscience and how they are being used by militaries to 1) enhance their own soldiers and 2) degrade the abilities of their opponents. The post is in response to a report by The Royal Society, Neuroscience, Conflict and Security, which outlines these developments, speculates on where they may lead, and considers their ethical implications.

As Bousquet notes, it’s some pretty hairy stuff:

Yet perhaps the most potentially consequential developments will be found in the area of neural interfacing and its efforts to bring the human nervous system and computing machines under a single informational architecture. The report’s authors note here the benefits that accrue from this research to the disabled in terms of improvements to the range of physical and social interactions available to them through a variety of neurally controlled prosthetic extensions. While this is indeed the case, there is a particular irony to the fact that the war mutilated (which the Afghan and Iraq conflicts have produced in abundance – according to one estimate, over 180,000 US veterans from these conflicts are on disability benefits) have become one of the main testing grounds for technologies that may in the future do much more than restore lost capabilities. Among one of the most striking suggestions is that:

electrode arrays implanted in the nervous system could provide a connection between the nervous system of an able-bodied individual and a specific hardware or software system. Since the human brain can process images, such as targets, much faster than the subject is consciously aware, a neurally interfaced weapons systems could provide significant advantages over other system control methods in terms of speed and accuracy. (p.40)

In other words, human brains may be harnessed within fire control systems to perform cognitive tasks before these even become conscious to them. Aside from the huge ethical and legal issues that it would raise, one cannot but observe that under such a scheme the functional distinction between human operator and machine seems to collapse entirely with the evaporation of any pretense of individual volition.

Noting scientific developments aimed at altering the sensory perception of enemies on the battlefield, Bousquet concludes: “The holy grail of military neuroscience is therefore nothing less than the ability to directly hack into and reprogram a target’s perceptions and beliefs, doing away even with the need for kinetic force. So that when neural warfare does truly arrive, we may not even know it.”

A couple of thoughts:

First, The Royal Society report is interesting for its inclusion of a relatively decent overview of the law that would apply to such weapons. Ken Anderson at Lawfare disagrees, suggesting that “The legal and ethical issues are of course legion and barely explored.” However, considering the report is relatively brief, the legal and ethical section does take up a proportionally large chunk of it. In addition, the report includes no fewer than four recommendations for improvements to the Chemical Weapons Convention and Biological Weapons Convention regimes. Interestingly, the authors do not suggest any improvements to the law of war/IHL as opposed to arms control. I find this somewhat surprising. While there are principles that apply to ALL weaponry (distinction, proportionality and necessity – and, of course, the prohibition of unnecessary suffering), I would argue that non-lethal neuro-weapons are a definite grey area. (As The Royal Society report notes, altering someone’s sensory perception has radical implications for notions of responsibility in the prosecution of war crimes.)

Second, Bousquet’s last point is interesting in that it reflects the constant quest over the last century and a half to develop weapons that would end the need for kinetic force. I’m presently reading P.D. Smith’s Doomsday Men, a social history of the application of science to warfare and weapons of mass destruction, which traces the development of, and the logic behind, weapons that were supposed to be so terrible they could never be used – or, if used, would be so terrible as to inspire an end to warfare. This was the case for chemical/gas weapons and eventually the atomic bomb: many of their creators thought that mere possession of such weapons would be enough to stop countries from fighting one another altogether, because the consequences would be so terrible.

As Smith demonstrates in his book, such a theory of non-use of weapons was a frequent theme of the science fiction literature of the time, particularly that of HG Wells:

The United States of America entered World War I under the slogan of ‘the war to end all wars’. Never has idealism been so badly used. From Hollis Godfrey’s The Man Who Ended War (1908) to H.G. Wells’s The World Set Free (1914), the idea of fighting a final battle to win universal peace had gripped readers in Europe and America. Wells’s novel even introduced the phrase ‘war that will end war’.
Once again, science played a vital role in these stories. A new figure emerged in pre-war fiction – the saviour scientist, a Promethean genius who uses his scientific knowledge to save his country and banish war forever. It is the ultimate victory for Science and Progress…

As James writes, these works of science fiction promoted the idea that “through revolutionary science and the actions of an idealistic scientist, war could be made a thing of the past.” In some works a terrible war is required to win the peace through science, but it is clear that, in the view of many of these pre-war “science romance” novels (which would go on to inspire many of the future atomic scientists who worked on the nuclear bomb), super weapons could stop war.

Should we then read neuro-weapons in this light – as part of the constant scientific quest to develop weapons which will end the need to fight?

Robotic Planes: Harbinger of Robotic Weapons?


The LA Times‘ latest article on drones raises the spectre of “robot weapons” in relation to the X-47B, Northrop Grumman’s new drone prototype with the ability to fly solo – part of an ongoing force restructuring as the US military cuts back significantly on human personnel.

While one might well ask whether a robotic plane (i.e. one that can fly autonomously) constitutes a robotic weapon if a human remains in the loop for any targeting decisions, what is notable about this press narrative is that the increasing autonomy of non-lethal systems is being constructed as the start of a slippery slope toward a world of fully autonomous weapons systems (AWS). Anti-AWS campaigner Noel Sharkey is quoted in the article:

“Lethal actions should have a clear chain of accountability,” said Noel Sharkey, a computer scientist and robotics expert. “This is difficult with a robot weapon. The robot cannot be held accountable. So is it the commander who used it? The politician who authorized it? The military’s acquisition process? The manufacturer, for faulty equipment?”

And this is the first press coverage I’ve seen that invokes the evolving position of the ICRC on the topic:

“The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,” committee President Jakob Kellenberger said at a recent conference. “The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.”

Indeed, ICRC President Jakob Kellenberger‘s keynote address during last year’s ICRC meeting on new weapons technologies in San Remo suggests that legal issues pertaining to autonomous weapons are at least percolating on the organization’s internal agenda now, as opposed to previously. Thinking ahead to norm development in this area – the interest of a key player in the arms control regime signals an emerging trend in that direction – it’s worth having a look at the entire relevant text from Kellenberger’s speech:

Automated weapon systems – robots in common parlance – go a step further than remote-controlled systems. They are not remotely controlled but function in a self-contained and independent manner once deployed. Examples of such systems include automated sentry guns, sensor-fused munitions and certain anti-vehicle landmines. Although deployed by humans, such systems will independently verify or detect a particular type of target object and then fire or detonate. An automated sentry gun, for instance, may fire, or not, following voice verification of a potential intruder based on a password.

The central challenge with automated systems is to ensure that they are indeed capable of the level of discrimination required by IHL. The capacity to discriminate, as required by IHL, will depend entirely on the quality and variety of sensors and programming employed within the system. Up to now, it is unclear how such systems would differentiate a civilian from a combatant or a wounded or incapacitated combatant from an able combatant. Also, it is not clear how these weapons could assess the incidental loss of civilian lives, injury to civilians or damage to civilian objects, and comply with the principle of proportionality.

An even further step would consist in the deployment of autonomous weapon systems, that is weapon systems that can learn or adapt their functioning in response to changing circumstances. A truly autonomous system would have artificial intelligence that would have to be capable of implementing IHL. While there is considerable interest and funding for research in this area, such systems have not yet been weaponised. Their development represents a monumental programming challenge that may well prove impossible. The deployment of such systems would reflect a paradigm shift and a major qualitative change in the conduct of hostilities. It would also raise a range of fundamental legal, ethical and societal issues which need to be considered before such systems are developed or deployed. A robot could be programmed to behave more ethically and far more cautiously on the battlefield than a human being. But what if it is technically impossible to reliably program an autonomous weapon system so as to ensure that it functions in accordance with IHL under battlefield conditions?

When we discuss these new technologies, let us also look at their possible advantages in contributing to greater protection. Respect for the principles of distinction and proportionality means that certain precautions in attack, provided for in article 57 of Additional Protocol I, must be taken. This includes the obligation of an attacker to take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental civilian casualties and damages. In certain cases cyber operations or the deployment of remote-controlled weapons or robots might cause fewer incidental civilian casualties and less incidental civilian damage compared to the use of conventional weapons. Greater precautions might also be feasible in practice, simply because these weapons are deployed from a safe distance, often with time to choose one’s target carefully and to choose the moment of attack in order to minimise civilian casualties and damage. It may be argued that in such circumstances this rule would require that a commander consider whether he or she can achieve the same military advantage by using such means and methods of warfare, if practicable.

Three initial reactions, more later as I follow this issue for my book-manuscript-in-progress this Spring:

First, a distinction is being drawn in the legal discourse between “automated” and “autonomous” weapons, suggesting to me that the ICRC sees a soft and a hard line here, one that is being obscured in the media and popular discourse. How this will play out in efforts to apply humanitarian law to these new systems will be interesting to see.

Second, Kellenberger acknowledges the counter-claim that autonomous systems might have advantages from a war law perspective (this argument being put forth most famously by Georgia Tech’s Ronald Arkin). This suggests that the ICRC is far from taking a stance on whether or not these weapons should be pre-emptively banned, as some claim, and as blinding lasers were previously. Instead they are still listening and observing. It will be interesting to see how this debate develops among humanitarian law elites.

Third, I’m glad to see Kellenberger focusing on the question of discrimination, but it should be pointed out that the concept of discrimination in IHL is about more than whether distinction between civilians and combatants is possible; it also turns on whether a system remains controllable by humans once deployed – whether its effects can be limited. Anti-AWS advocates are certainly making the case that they may not be, and existing humanitarian law provides them some legal leverage to develop that argument if they choose – even if it is shown that such weapons are highly discriminate.

Epistemic Communities and their Discontents

Of note to those following developments in autonomous lethal robots is an article published this summer in the Columbia Science and Technology Law Review, entitled “International Governance of Autonomous Lethal Robots.” It is co-authored by a bevy of individuals calling themselves the Autonomous Robotics Thrust Group of the Consortium on Emerging Technologies, Military Operations and National Security (CETMONS), a collection of ethics and technology experts from various North American universities. According to the article:

“A variety of never-before-anticipated, complex legal, ethical and political issues have been created – issues in need of prompt attention and action… The recent controversy over unmanned aerial vehicles (UAVs) that are nevertheless human-controlled… demonstrates the importance of anticipating and trying to address in a proactive manner the concerns about the next generation of such weapons – autonomous, lethal robotics. While there is much room for debate about what substantive policies and restrictions (if any) should apply to LARs, there is broad agreement that now is the time to discuss those issues.”

This is only the most recent call for international policy attention to one of the most game-changing developments in military technology and military norms in history. In that, the ARTG joins other emerging networks of professionals bound together by the causal belief that nations will have interests in pursuing fully autonomous weapons and the normative belief that such developments should be subject to ethical regulation in advance – a precautionary principle, as it were. The International Committee on Robot Arms Control (ICRAC), for example, issued a statement in Berlin last year:

“Given the rapid pace of development of armed tele-operated and autonomous robotic systems, we call upon the international community to commence a discussion about the pressing dangers that these systems pose to peace and international security and to civilians, who continue to suffer most in armed conflict.”

At the same time, I see significant differences between the ICRAC statement and the argument in the ARTG article.

First, whereas ICRAC is concerned about the ability of weaponized robots to follow basic war law rules, ARTG suggests that “it may be anticipated that in the future autonomous robots may be able to perform better than humans [with respect to adherence to the existing laws of war].” This is not surprising, since one of the ARTG authors is Ronald Arkin, who is pioneering designs for such an ethical robot soldier and has written an important book on the topic.

Second, whereas ICRAC has floated a menu of options that includes prohibitions on some or all uses of autonomous robots, the ARTG authors argue “it remains an open question whether the differences between LAR and existing military technology are significant enough to bar the former’s use” and moreover appear to assume such prohibitions would not, at any rate, check the deployment of such weapons: “the trend is clear: autonomous robots will ultimately be deployed in the conduct of warfare.” ICRAC’s position is far more optimistic about the potential of norm-building efforts to forestall that outcome, and far more pessimistic about the normative value of the weapons.

In short, both ARTG and ICRAC would appear to constitute examples of epistemic communities:

“networks of professionals with recognized knowledge and skill in a particular issue-area, sharing a set of beliefs, which provide a value-based foundation for the actions of members.”

But do these groups constitute nodes in a single epistemic network due to the shared causal and principled beliefs that the weaponization of robots is proceeding apace and that proactive governance over these developments is now a necessary public good?

Or do they constitute separate, competing epistemic communities operating in the same policy space with very different visions about what that governance should look like? If the latter, do they indeed constitute counter-communities, similar to the counter-campaigns Cliff Bob is documenting in the NGO sector?

Analytically, is there a standard for making this determination as an empirical matter or is it simply a matter of how one black-boxes the emergent norm under study? If I understand the “norm” in question as a precautionary principle in favor of some preliminary ethical discussion about AWS, then both these groups have a shared agenda whatever their different viewpoints on the ethics involved. If I focus on what they argue the outcome should be, my interpretation is that they represent different agendas (that may be true within each group as well, of course, as in any community there will be differences of opinion over outcome, process or strategy).

I put this question forth largely as a bleg since I am not an expert in the epistemic communities literature and yet probably need to become one as I develop this particular case study for my book. Has someone developed a typology that I would find useful? Other thoughts or useful literature you can point me to?

Killer Robot Blogging at Complex Terrain Lab

Complex Terrain Lab’s Symposium on Peter Singer’s Wired for War kicked off today with opening comments by Singer and my post on the politics of global norm construction re. autonomous weapons. They’ve got a fantastic line-up of bloggers over there, including Ken Anderson of Opinio Juris and Matt Armstrong of Mountain Runner, so check it out.

Robot Soldiers v. Autonomous Weapons: Why It Matters

I have a post up right now at Complex Terrain Lab about developments in the area of autonomous weaponry as a response to asymmetric security environments. While fully autonomous weapons are some distance away, a number of researchers and bloggers argue that these trends in military technology have significant moral implications for implementing the laws of war.

In particular, such writers question whether machines can be designed to make ethical targeting decisions; how responsibility for mistakes is to be allocated and punished; and whether the ability to wage war without risking soldiers’ lives will remove incentives for peaceful conflict resolution.

On one side are those who oppose any weapons whose targeting systems don’t include a man (or woman) “in the loop” and indeed call for a global code of conduct regarding such weapons: it was even reported earlier this year that autonomous weapons could be the next target of transnational advocacy networks on the basis of their ethical implications.

On the other side of the debate are roboticists like those at Georgia Tech’s Mobile Robot Lab who argue that machines can one day be superior to human soldiers at complying with the rules of war. After all, they will never panic, succumb to “scenario-fulfillment bias” or act out of hatred or revenge.

Earlier this year, Kenneth Anderson took this debate to a level of greater nuance by asking, at Opinio Juris, how one might program a “robot soldier” to mimic the ideal human soldier. He asks not whether it is likely that a robot could improve upon a human soldier’s ethical performance in war but rather:

Is the ideal autonomous battlefield robot one that makes decisions as the ideal ethical soldier would? Is that the right model in the first place? What the robot question poses by implication, however, is what, if any, is the value of either robots or human soldiers set against the lives of civilians. This question arises from a simple point – a robot is a machine, and does not have the moral worth of a human being, including a human soldier or a civilian, at least not unless and until we finally move into Asimov-territory. Should a robot attach any value to itself, to its own self preservation, at the cost of civilian collateral damage? How much, and does that differ from the value that a human soldier has?

I won’t respond directly to Anderson’s point about military necessity, with which I agree, or to his broader questions about asymmetric warfare, which are covered at CTLab. Instead, I want to highlight what framing these weapons as analogous to soldiers implies for potential norm development in this area. As I see it, a precautionary principle against autonomous weapons, if indeed one is warranted, depends a great deal on whether we accept the construction of autonomous weapons as “robot soldiers” or whether they remain conceptualized as merely a category of “weapon.”

This difference is crucial because the status of soldiers in international law is quite different from the status of weapons. Article 36 of Additional Protocol I requires states to “determine whether a new weapon or method of warfare is compatible with international law” – that is, with the principles of discrimination and proportionality. If a weapon cannot by its very nature discriminate between civilians and combatants, or if its effects cannot be controlled after it is deployed, it does not meet the criteria for new weapons under international law. Adopting this perspective would put the burden of proof on designers of such weapons and would give norm entrepreneurs like Noel Sharkey or Robert Sparrow a framework with which to argue that such robots are unlikely to be capable of the kind of difficult judgments necessary in asymmetric warfare to comply with existing international law.

But if robots are ever imagined to be analogous to soldiers, then the requirements would be different. Soldiers must only endeavor to discriminate between civilians and combatants and use weapons capable of discriminating. They need not actually do so perfectly, and in fact it is common to argue nowadays that it is almost impossible to do so in many conflict environments. In such cases, the principles of military necessity and proportionality trade off against discrimination. And the fact that soldiers cannot necessarily be “controlled” once they are deployed does not militate against their use, as is the case with uncontrollable weapons like earlier generations of anti-personnel landmines. In such a framework, the argument that robots might sometimes make mistakes would not mean their development itself is necessarily unethical. All that designers would then most likely need to demonstrate is that robots are likelier to improve upon human performance.

In other words, framing matters.
