Yes. Only two days after Human Rights Watch launched its “preemptive call” to ban the development and deployment of such systems, the US Defense Department doubled down with a document (shorter version here) that claims:
“Autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.
But I’m not sure the document is so clear-cut. The term “appropriate levels of human judgment” leaves a lot open to interpretation, especially since a debate continues to rage about whether humans are in fact inferior judges of when to pull the trigger.
Consider this: by the directive’s own definition, such weapons will include not only “human-supervised autonomous weapons” that could be shut down by a human in the event of a weapon system failure, but apparently also non-human-supervised “autonomous weapons” that would implicitly lack such safeguards. In other words, the DOD typology imagines a level of autonomy beyond the “human out of the loop” weapons problematized in Human Rights Watch’s new report. The difference between the two typologies is that HRW bases its degrees of autonomy on how much capacity humans have to select targets, with “human out of the loop” the point at which machines can select targets without human input. But in the DOD document, degrees of autonomy appear to be based on how much power humans have to intervene, with the ability to intervene at all reserved for “human-supervised” systems and no requirement that all autonomous systems include such a fail-safe. (In other words, this DefenseNews press release is badly titled, to say the least.)
This policy is basically, “Let the machines target other machines; let men target men.” Since the most compelling arms-race pressures will arise from machine-vs.-machine confrontation, this solution is a thin blanket, but it suggests some level of sensitivity to the issue of robots targeting humans without being able to exercise “human judgment” — a phrase that appears repeatedly in the DoD Directive. This approach seems calculated to preempt the main thrust of HRW’s report: that robots cannot satisfy the principles of distinction and proportionality required by international humanitarian law, and that AWS should therefore never be allowed.
But it seems to me that this document aims to provide more ambiguity and flexibility than structure. Proposed limitations and distinctions can be overridden by the proper authorities, namely several under-secretaries of defense, prior to the development stage. While in theory the policy appears to state that machines will target other machines only, in reality it’s not that simple if the goal is to get around the discrimination/proportionality debate: target a generator and you may kill dialysis patients; target any particular military objective and you may hit civilians in the area. Since this will be obvious to humanitarian law types, if this document was indeed created to defuse emerging global concern that such systems by their nature may not meet war law standards, I suspect it is unlikely to succeed. Indeed, I’m not so sure that is the purpose of the document, since probably the strongest and least ambiguous policy statement is the injunction on p. 11 that:
“Legal reviews of autonomous and semi-autonomous weapon systems [must be] conducted… [to] ensure consistency with all applicable domestic and international law and, in particular, the law of war.”
The law of war, of course, specifies in Article 36 of Additional Protocol I to the Geneva Conventions, entitled “New Weapons,” that:
“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.”
and, in Article 51(4)(c), that indiscriminate attacks are prohibited and include:
“those which employ a method or means of combat the effects of which cannot be limited…”
Which means that for such weapons to be used discriminately, they must be employed in a manner whose effects can be limited, and that this must be addressed in such reviews if the US is to be in compliance with “the law of war.” I think a strong case could be made that “autonomous weapons” as the DOD defined them this week would fall outside that definition; quite possibly the authors of the document understand this.

Now there’s a catch: the US is not a party to Additional Protocol I. So a very interesting conversation is now going to begin. At first, it will be about whether the US is nonetheless bound by that rule under customary law, or whether the rule binds only governments that have signed it. If the US rests its case on its non-party status, that will reinforce legal arguments that such a prohibition is implicit for states parties, strengthening the push for a non-use norm (historically the US has tended to follow such norms even when not legally bound). If the US does not rest its case there, then a second interesting conversation will begin, whether the US is obligated to engage in it or not: whether any of the weapons systems described as “autonomous” in the report could by their nature be limited in the way described once released from human control with no ability for humans to override; and whether in fact human-supervised autonomous systems would meet this criterion, given that they would undoubtedly cause at least incidental human casualties.

In other words, if the DoD takes its own directive seriously, it will have no choice but to address the arguments being put forth by human rights groups. And given the sudden certainty that these systems are very clearly going to be science fact, and are envisioned to be highly autonomous, the US has probably strengthened the “killer robot” ban movement’s hand. On the other hand, by suggesting a level of autonomy far beyond what was already feared, it is also shifting the goal posts just as the conversation is getting started.
Stay tuned…
Charli, thanks for linking to my analysis, where I do note that the Directive provides for full AW that target humans to be allowed given the signatures of three USDs plus the JCS Chair, and even includes a page of guidelines for that. However, what is the purpose of this “ambiguity and flexibility,” as you call it, if not to at least suggest that the DoD is taking a cautious approach to turning killer robots loose on humans? Having it written this way allows them to deny that the DoD has approved such weapons, although it clearly has not closed the books on them, either.
I think the general approach they are taking is to argue (and “direct”) that all use of AW will take place under “appropriate levels of human judgment,” with human commanders and operators responsible for taking into account the risks to civilians given the limitations of the weapons. This is how they have already preemptively addressed the arguments based on IHL, including Protocol I.
They will say that they have no intention of introducing AW into urban combat situations where the ability to distinguish between combatants and noncombatants will pose a problem. They will talk about UAV vs. UAV and naval combat, and ATR systems that hunt rocket launchers and radars and the like. Such systems might be programmed not to attack such targets if any humans are present, or to phone home and ask for permission first.
I agree that the release of the world’s first killer robot policy has suddenly sucked the laughing gas out of the room, but other than that I don’t see how it has strengthened the hand of opponents. Your argument seems to rest on the point that full AW “cannot be limited” in effect, but the DoD will simply deny this is the case, and say instead that it has no intention of using AW except in situations where their possible effects are limited and the risk of their causing unintended harm to civilians is judged by responsible and well-informed human commanders to be proportional to the military objectives to be gained.
I’m imagining how cumbersome the targeting and approval processes will be when these AW systems are to be employed. They will no doubt evolve from our current processes authorizing the release of certain munitions from certain systems. I think an understanding of current targeting and approval processes can provide insight into how humans could affect just how autonomously these systems will operate on the battlefield. Sorry; I know that was a bit of a tangent, but your piece has me thinking.