LA Times’ latest article on drones raises the spectre of “robot weapons” in relation to the X-47B, Northrop Grumman’s new drone prototype with the ability to fly solo – part of an ongoing force restructuring as the US military cuts back significantly on human personnel.
While one might well ask whether a robotic plane (i.e. one that can fly autonomously) constitutes a robotic weapon if a human remains in the loop for targeting decisions, what’s notable about the press coverage is that the increasing autonomy of non-lethal systems is being framed as the first step down a slippery slope toward a world of fully autonomous weapons systems (AWS). Anti-AWS campaigner Noel Sharkey is quoted in the article:
“Lethal actions should have a clear chain of accountability,” said Noel Sharkey, a computer scientist and robotics expert. “This is difficult with a robot weapon. The robot cannot be held accountable. So is it the commander who used it? The politician who authorized it? The military’s acquisition process? The manufacturer, for faulty equipment?”
And this is the first press coverage I’ve seen that invokes the evolving position of the ICRC on the topic:
“The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,” committee President Jakob Kellenberger said at a recent conference. “The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.”
Indeed, ICRC President Jakob Kellenberger’s keynote address at last year’s ICRC meeting on new weapons technologies in San Remo suggests that legal issues pertaining to autonomous weapons are now at least percolating on the organization’s internal agenda, where previously they were not. Thinking ahead to norm development in this area – the interest of a key player in the arms control regime signals an emerging trend in that direction – it’s worth having a look at the full relevant passage from Kellenberger’s speech:
Automated weapon systems – robots in common parlance – go a step further than remote-controlled systems. They are not remotely controlled but function in a self-contained and independent manner once deployed. Examples of such systems include automated sentry guns, sensor-fused munitions and certain anti-vehicle landmines. Although deployed by humans, such systems will independently verify or detect a particular type of target object and then fire or detonate. An automated sentry gun, for instance, may fire, or not, following voice verification of a potential intruder based on a password.
The central challenge with automated systems is to ensure that they are indeed capable of the level of discrimination required by IHL. The capacity to discriminate, as required by IHL, will depend entirely on the quality and variety of sensors and programming employed within the system. Up to now, it is unclear how such systems would differentiate a civilian from a combatant or a wounded or incapacitated combatant from an able combatant. Also, it is not clear how these weapons could assess the incidental loss of civilian lives, injury to civilians or damage to civilian objects, and comply with the principle of proportionality.
An even further step would consist in the deployment of autonomous weapon systems, that is weapon systems that can learn or adapt their functioning in response to changing circumstances. A truly autonomous system would have artificial intelligence that would have to be capable of implementing IHL. While there is considerable interest and funding for research in this area, such systems have not yet been weaponised. Their development represents a monumental programming challenge that may well prove impossible. The deployment of such systems would reflect a paradigm shift and a major qualitative change in the conduct of hostilities. It would also raise a range of fundamental legal, ethical and societal issues which need to be considered before such systems are developed or deployed. A robot could be programmed to behave more ethically and far more cautiously on the battlefield than a human being. But what if it is technically impossible to reliably program an autonomous weapon system so as to ensure that it functions in accordance with IHL under battlefield conditions?
When we discuss these new technologies, let us also look at their possible advantages in contributing to greater protection. Respect for the principles of distinction and proportionality means that certain precautions in attack, provided for in article 57 of Additional Protocol I, must be taken. This includes the obligation of an attacker to take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental civilian casualties and damages. In certain cases cyber operations or the deployment of remote-controlled weapons or robots might cause fewer incidental civilian casualties and less incidental civilian damage compared to the use of conventional weapons. Greater precautions might also be feasible in practice, simply because these weapons are deployed from a safe distance, often with time to choose one’s target carefully and to choose the moment of attack in order to minimise civilian casualties and damage. It may be argued that in such circumstances this rule would require that a commander consider whether he or she can achieve the same military advantage by using such means and methods of warfare, if practicable.
Three initial reactions; more to follow as I track this issue for my book-manuscript-in-progress this spring:
First, a distinction is being drawn in the legal discourse between “automated” and “autonomous” weapons, suggesting to me that the ICRC sees a soft line and a hard line here, one that is being obscured in media and popular discourse. How this will play out in efforts to apply humanitarian law to these new systems will be interesting to see.
Second, Kellenberger acknowledges the counter-claim that autonomous systems might have advantages from a war law perspective (an argument put forth most famously by Georgia Tech’s Ronald Arkin). This suggests that the ICRC is far from taking a stance on whether these weapons should be pre-emptively banned, as some urge and as blinding lasers were previously; instead, the organization is still listening and observing. It will be interesting to see how this debate develops among humanitarian law elites.
Third, I’m glad to see Kellenberger focusing on the question of discrimination, but it should be pointed out that discrimination in IHL is not simply about whether a system can distinguish civilians from combatants; it is also about whether a system remains controllable by humans once deployed – whether its effects can be limited. Anti-AWS advocates are certainly making the case that they may not be, and existing humanitarian law gives them legal leverage to develop that argument if they choose – even if such weapons prove highly discriminate.