I have a post up right now at Complex Terrain Lab about developments in the area of autonomous weaponry as a response to asymmetric security environments. While fully autonomous weapons are some distance away, a number of researchers and bloggers argue that these trends in military technology have significant moral implications for implementing the laws of war.
In particular, such writers question whether machines can be designed to make ethical targeting decisions; how responsibility for mistakes is to be allocated and punished; and whether the ability to wage war without risking soldiers’ lives will remove incentives for peaceful conflict resolution.
On one side are those who oppose any weapon whose targeting system does not keep a man (or woman) “in the loop” and who call for a global code of conduct regarding such weapons; it was even reported earlier this year that autonomous weapons could be the next target of transnational advocacy networks on the basis of their ethical implications.
On the other side of the debate are roboticists like those at Georgia Tech’s Mobile Robot Lab who argue that machines could one day be superior to human soldiers at complying with the rules of war. After all, they will never panic, succumb to “scenario-fulfillment bias,” or act out of hatred or revenge.
Earlier this year, Kenneth Anderson took this debate to a level of greater nuance by asking, at Opinio Juris, how one might program a “robot soldier” to mimic the ideal human soldier. He asks not whether it is likely that a robot could improve upon a human soldier’s ethical performance in war but rather:
Is the ideal autonomous battlefield robot one that makes decisions as the ideal ethical soldier would? Is that the right model in the first place? What the robot question poses by implication, however, is what, if any, is the value of either robots or human soldiers set against the lives of civilians. This question arises from a simple point – a robot is a machine, and does not have the moral worth of a human being, including a human soldier or a civilian, at least not unless and until we finally move into Asimov-territory. Should a robot attach any value to itself, to its own self preservation, at the cost of civilian collateral damage? How much, and does that differ from the value that a human soldier has?
I won’t respond directly to Anderson’s point about military necessity, with which I agree, or to his broader questions about asymmetric warfare, which are covered at CTLab. Instead, I want to highlight what framing these weapons as analogous to soldiers implies for potential norm development in this area. As I see it, a precautionary principle against autonomous weapons, if indeed one is warranted, depends a great deal on whether we accept the construction of autonomous weapons as “robot soldiers” or whether they remain conceptualized merely as a category of “weapon.”
This difference is crucial because the status of soldiers in international law is quite different from the status of weapons. Article 36 of Additional Protocol I requires states to “determine whether a new weapon or method of warfare is compatible with international law” – that is, with the principles of discrimination and proportionality. If a weapon cannot by its very nature discriminate between civilians and combatants, or if its effects cannot be controlled once it is deployed, it does not meet the criteria for new weapons under international law. Adopting this perspective would put the burden of proof on the designers of such weapons and would give norm entrepreneurs like Noel Sharkey or Robert Sparrow a framework for arguing that such robots are unlikely to be able to make the kinds of difficult judgments that following existing international law in asymmetric warfare requires.
But if robots come to be imagined as analogous to soldiers, the requirements would be different. Soldiers must only endeavor to discriminate between civilians and combatants and use weapons capable of discriminating; they need not actually do so perfectly, and in fact it is now common to argue that doing so is nearly impossible in many conflict environments. In such cases, the principles of military necessity and proportionality trade off against discrimination. And the fact that soldiers cannot necessarily be “controlled” once they are deployed does not militate against their use, as it does with uncontrollable weapons like earlier generations of anti-personnel landmines. Within such a framework, the argument that robots might sometimes make mistakes does not mean their development would itself be unethical. All that designers would most likely need to demonstrate is that such robots are likely to improve upon human performance.
In other words, framing matters.