This is a guest post by Juergen Altmann and Frank Sauer. Juergen Altmann is a Researcher and Lecturer at Technische Universität Dortmund, a specialist in military technology and preventive arms control, and among the first scholars to study the military uses of nanotechnology. Frank Sauer is a Senior Research Fellow and Lecturer at Bundeswehr University Munich, a specialist in international security, and the author of Atomic Anxiety: Deterrence, Taboo and the Non-Use of U.S. Nuclear Weapons.
Autonomous weapon systems: rarely has an issue gained the attention of the international arms control community as quickly as these so-called killer robots. “Once activated, they can select and engage targets without further intervention by a human operator,” according to the Pentagon. They are, judging from the skepticism prevalent in epistemic communities and public opinion alike, a controversial development.
Come next Monday, the United Nations in Geneva will begin its third informal experts meeting on this emerging arms technology. For the third year in a row, various technical, legal and ethical questions surrounding autonomous weapons will be discussed at the UN’s Convention on Certain Conventional Weapons (CCW): Where does autonomy begin, where does meaningful human control end? Can these systems function in compliance with international humanitarian law? Who is accountable if things go awry? Can “outsourcing” kill-decisions to machines be morally acceptable in the first place?
Depending on how CCW States Parties answer these questions, the still nascent social taboo that forbids the use of machines autonomously making kill-decisions might spawn a human security regime and be codified in a CCW protocol. In short, a ban might be in the cards for killer robots.
And in fact, there is an additional set of compelling reasons for preventive arms control that has received comparatively little attention so far (with notable exceptions, of course): the impact of killer robots on peace and stability.
Stability: not a Cold War relic
Stability became a key notion in Cold War international thought for two reasons. The first was the arms race: arms competition instability exists when the classic dynamic of one side deploying systems that lead adversaries to respond in kind, and vice versa, goes unchecked, with horizontal and vertical proliferation in tow. The second was crises: crisis instability exists when there are significant incentives to initiate an attack quickly. Such incentives can also arise when (conventional) war is already underway, hastening escalation to higher levels of conflict, potentially even across the nuclear threshold in a “use them or lose them” situation.
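To see how a “use them or lose them” situation pulls decisions toward striking first, consider a minimal sketch in Python. All numbers – the probability of an adversary first strike, the payoffs, the cost of war – are invented for illustration and are not drawn from any real planning scenario:

```python
# A toy expected-value comparison for a "use them or lose them" crisis.
# All numbers are hypothetical assumptions for illustration only.

def expected_payoffs(p_adversary_strikes: float) -> tuple[float, float]:
    """Return (payoff of waiting, payoff of striking first)."""
    forces_intact = 10      # assumed value of keeping one's forces
    first_strike_bonus = 2  # assumed advantage of attacking first
    cost_of_war = 8         # assumed cost of any war, won or lost

    # Waiting: forces survive only if the adversary holds back; if the
    # adversary strikes first, the forces are lost (payoff 0) and the
    # cost of war is paid anyway.
    wait = (1 - p_adversary_strikes) * forces_intact \
         + p_adversary_strikes * (0 - cost_of_war)

    # Striking first: forces are used before they can be lost,
    # but the cost of war is incurred for certain.
    strike = forces_intact + first_strike_bonus - cost_of_war
    return wait, strike

for p in (0.1, 0.3, 0.5, 0.7):
    wait, strike = expected_payoffs(p)
    better = "wait" if wait > strike else "strike first"
    print(f"P(adversary strikes) = {p:.1f}: wait = {wait:5.1f}, "
          f"strike = {strike:5.1f} -> {better}")
```

The point is not the particular numbers but the structure: the more one fears losing one’s forces to the other side’s first strike, the more “rational” striking first becomes – and, as argued below, autonomous systems compress the time available to second-guess that calculation.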
The vicious cycle of an uncurbed arms race, as well as the dangers of overboiling crises and deterrence failure – underscored by the accidental nuclear war scares caused by early-warning slipups and human error – provided cautionary tales and fueled the drive for stability via arms control during the Cold War, not only in the nuclear realm but also in the conventional one, with the Conventional Armed Forces in Europe (CFE) Treaty. The IR and arms control literature documents these lessons. They carry over to the dawning age of autonomous weapons.
Proliferation and arms race instability
Strictly speaking, autonomous weapons do not exist yet. They are not to be confused with automatic defense systems capable of “firing without a human in the loop”. These are stationary or fixed on ships or trailers and mostly fire at inanimate targets such as incoming munitions. More importantly, they merely perform pre-programmed actions repeatedly and operate in a comparatively structured and controlled environment.
Autonomous weapon systems, in contrast, would have their own means of propulsion and be able to operate without human control or supervision in dynamic, unstructured, open environments over an extended period of time, potentially learning and adapting their behavior on the go. The military advantages – compared to today’s remotely piloted systems – are obvious. Think of a future autonomous combat drone sent off to seek, identify, track and attack targets on its own, and you’re spot on. They are called killer robots for a reason.
The drone sector gives an indication of what to expect. Between 2001 and 2015, the number of countries with armed drones increased from two to ten (add Hamas and Hezbollah to that), and at least eleven countries are currently developing them.
Meanwhile, everything points toward weapon autonomy as the next logical step. The US, with its newly stated third offset strategy, explicitly embraces autonomy to achieve military-technological superiority and is consequently leading the way in the air, on the ground, on the sea and below it. And while the US is the only country so far to have introduced a doctrine for the deployment and use of autonomous weapon systems, one that claims restraint, Deputy Secretary of Defense Bob Work recently stated that the delegation of lethal authority will inexorably happen.
Absent an international ban, one would expect others to follow that lead. After all, who would allow a “killer robot gap”? Especially considering that implementing autonomy in already existing systems – a vibrant ecosystem of unmanned vehicles of various shapes and sizes – is not the equivalent of starting a nuclear program from scratch: it is a technical challenge, yes, but a doable one, particularly with significant portions of the hard- and software being dual-use. And we are not even considering technology export yet. In short, an unchecked robotics arms race is in the making – with weapons potentially proliferating to everyone, including oppressive regimes and non-state actors.
Crisis escalation and instability
Autonomous weapons are commonly projected as systems of systems operating in swarms. With that in mind, imagine a severe crisis, with the swarms of adversaries operating in close proximity to each other. A coordinated attack by one could wipe out the other within missile flight time – that is, seconds. The control software would have to react fast in order to use its weapons before they are lost. Sun glint in visual data misinterpreted as a rocket flame, a sudden, unforeseen move of the enemy swarm, or a simple software bug could trigger an erroneous “counter”-attack. And while this could happen on a small scale at first, the sequence of events developing from two autonomous systems of systems interacting at machine speed could never be trained for, tested, or, really, foreseen. The stock market’s “flash crashes” provide cautionary tales of such unforeseeable algorithm interactions. Introducing such algorithms into armed conflict bears an enormous risk of uncontrolled escalation from crisis to war.
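To illustrate how little it would take, here is a minimal simulation sketch in Python – not a model of any real system. Two sides run the same simple rule, “fire the moment a launch is detected”, and each sensor occasionally produces a false positive; the false-positive rate, the tick length and all other parameters are invented for illustration:

```python
# A toy sketch of two threshold-based "fire on detected launch" algorithms
# escalating from a single sensor false positive. All parameters invented.

import random

random.seed(4)

FALSE_POSITIVE_RATE = 0.001  # assumed chance per tick of a phantom "launch"
TICK_MS = 50                 # assumed decision cycle, far below human reaction time

def sensor_reading(enemy_fired: bool) -> bool:
    """Report a launch if one occurred, or occasionally hallucinate one
    (sun glint, an odd maneuver, a software bug)."""
    return enemy_fired or random.random() < FALSE_POSITIVE_RATE

def run(max_ticks: int = 10_000) -> None:
    a_fired = b_fired = False
    for tick in range(max_ticks):
        # Both sides read their sensors, then act on the same rule:
        # fire as soon as the other side appears to have fired.
        a_sees_launch = sensor_reading(b_fired)
        b_sees_launch = sensor_reading(a_fired)
        if a_sees_launch and not a_fired:
            a_fired = True
            print(f"t = {tick * TICK_MS} ms: A fires ('launch' detected)")
        if b_sees_launch and not b_fired:
            b_fired = True
            print(f"t = {tick * TICK_MS} ms: B fires ('launch' detected)")
        if a_fired and b_fired:
            print(f"t = {tick * TICK_MS} ms: mutual engagement, "
                  "without any human decision in the loop")
            return
    print("no engagement in this run")

run()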
In addition, swarms of autonomous weapons would generate new possibilities for disarming surprise attacks. Small, stealthy or extremely low-flying systems are difficult to detect, and the absence of a remote-control radio link makes detection even harder. Russia was already not amused when the idea of using stealthy drones for missile defense was floated in the US. It’s easy to see why: when nuclear weapons or strategic command-and-control systems are, or are perceived to be, put at risk by undetectable swarms that are hard to defend against, autonomous conventional capabilities end up causing instability at the strategic level.
Hitting the brakes
The case of autonomous weapon systems is not one of “we need them because they have them”. After all, no one has them – yet. We would be well-advised to keep it this way. Preventive arms control is prudent: not only would it curb the looming arms race, a ban would also forestall the excessive acceleration of battle, which threatens to outpace human understanding and to eliminate the possibility of staying in control during crises. Sometimes humans make mistakes, and humans are slower than machines. But when things threaten to get out of hand, slow is good. That is why we need to hit the brakes now.