This past week I was invited to speak as an expert at the United Nations Informal Meeting of Experts under the auspices of the Convention on Certain Conventional Weapons (CCW). The CCW’s purpose is to limit or prohibit certain conventional weapons that are excessively injurious or have indiscriminate effects. The Convention has five additional protocols restricting or banning particular weapons, such as blinding lasers and incendiary weapons. Last week’s meeting focused on whether the member states ought to consider a possible sixth additional protocol on lethal autonomous weapons, or “killer robots.”
My role in the meeting was to discuss the military rationale for the development and deployment of autonomous weapons. My remarks here reflect what I said to the state delegates and are my own opinions on the matter. They speak to what I take to be the central question of the killer robot debate: whether states are merely rehearsing an old argument about relative gains in power and capabilities, and about arms races. The 1964 political satire Dr. Strangelove finds comedy in the fact that, even in the face of certain nuclear annihilation for both the US and the former Soviet Union, US strategic leaders were still preoccupied with a relative disparity of power: the mineshaft gap. The US could not allow the Soviets to gain any advantage in “mineshaft space” – those deep underground spaces where the world’s inhabitants would be forced to relocate to keep the human race alive – because the Soviets would surely continue an expansionist policy and take out the US’s capability once humanity could safely emerge from nuclear contamination.
Last week’s talks appeared to show states once again bowing to the same logic regarding a killer robot gap, or so we might conclude from their reluctance to give up any potential military gain, despite the overwhelming caution urged by experts from a wide range of disciplines. While the “Doomsday Machine” in Dr. Strangelove removed human operators from the decision to retaliate, thus ensuring mutual destruction, the present debate turns on a comparable absence of human operators. Killer robots are machines that can detect, select and engage a target without a human operator. “Autonomous weapons system” is an umbrella term that covers systems in all domains and with a variety of capabilities. They range from Samsung’s relatively simple SGR-1 sentry bot, which senses objects through heat signatures and can fire on them without a human operator, to more complex systems that can identify, track, cue, prioritize, and respond with an appropriate countermeasure munition, such as the Aegis Combat System aboard various US and allied ships.
The 2015 CCW debate highlighted this wide breadth of technological capability, as well as the question of whether “meaningful human control” (MHC) can be ensured over a system that could potentially operate more quickly than any human warfighter. Several states argued that systems lacking meaningful human control would violate international humanitarian law and international human rights law, and thus ought to be banned or “proscribed” by the CCW. Other states claimed that the technology has not yet been developed to its full potential, and that the world ought therefore to wait and see the true limits of its use. But this last position is akin to the mineshaft gap rationale. It amounts to the claim that the current possessors of the technology do not want to give it up, and that they want to develop it further to stay ahead of any potential adversary or emerging threat. The only permissible gap is between those who possess it and those who do not, and if some other actor (state or nonstate) were to acquire it, we must not let them get ahead! We must not allow a killer robot gap!
While some might think my comparison overblown, it is rooted in the strategic doctrine and history of the Cold War. During the Cold War the US knew that it faced a technologically symmetric but numerically asymmetric adversary. In other words, the former Soviet Union’s forces outnumbered the US military, and in the event of a war, US forces would have been outmanned and outgunned. To meet this reality, the US began to focus on technology as the saving factor, the “force multiplier” that would compensate when actual bodies were lacking. Beginning in the early 1980s, the Defense Advanced Research Projects Agency (DARPA) began assembling research projects aimed at building a general artificial intelligence (AI) with several military applications. Foremost among them were autonomous navigation and vehicles, battle management software, and sensor fusion in weapons systems. Choosing which applications to pursue or to propose to the US Congress was a simple matter of matching goals to the governing strategic doctrine of the time, AirLand Battle.
Beginning in the early 2000s, however, US strategic doctrine shifted away from a land-based, symmetric-threat, forward-presence posture toward the AirSea Battle concept. US forces realized that they were now facing asymmetric and unpredictable threats, and so they pivoted to a lighter, more flexible approach. Such a doctrine balances against anti-access/area-denial strategies, which aim to prevent forces from entering a theater of operations or, once there, to deny them freedom of action in the area under an adversary’s control. In the thirty-odd-year history of technological development in artificial intelligence, unmanned or “autonomous” vehicles, battle management software and sensor fusion, the rationale has always been, and continues to be, to exploit one’s technological superiority for strategic, operational and tactical advantage. Secondary goals include assisting or replacing personnel in decision-making and in cognitive or labor-intensive tasks, and reducing or eliminating personnel’s exposure to danger and stress while achieving mission objectives. From the military point of view, winning is paramount, force protection and force reduction are secondary, and protection of civilians plays little if any part.
This debate, as it has unfolded over the years, is not new. (The term “killer robots” first appeared in a 1984 Newsweek article, for example.) It is a product of Cold War thinking, and the technologies we currently have on the battlefield, particularly self-driving cars and remotely piloted vehicles, are direct results of those DARPA projects of the 1980s. If the technology of the 1980s and 1990s is being deployed now, what should we expect on the next 20-to-30-year horizon?
This is what some states want to look toward, while others are very keen not to press the question and to “let the technology develop.” But if history is any guide, what is in research and development now is what will be fighting then. A quick review of DARPA’s project calls shows that collaborative autonomy is on the docket: the capability for various autonomous systems to “talk to” and “collaborate with” other systems, both within a single domain and across domains, is paramount. As the objective of the Collaborative Operations in Denied Environment (CODE) program states, “most of the current [unmanned] systems are not well matched to the needs of future conflicts, which DARPA anticipates being much less permissive, very dynamic, and characterized by a higher level of threats, contested electromagnetic spectrum, and re-locatable targets.” To achieve advantage in that environment, the anti-access/area-denial environment, autonomous systems must identify targets dynamically, transmit information to various other systems, route or guide other systems, and “protect each other by overwhelming defenses and other stratagems.” This last bit is essentially a nod to swarm tactics.
Why might researchers and experts be wary of a technology that promises so much? The answer is the likely arms race and the high probability of proliferation. If Country A produces weapon X, and weapon X promises a strategic advantage, then other countries will want it too. History shows that it is usually the powerful states that have the capability to match such innovation and production. Yet with every technological iteration, another is demanded to keep ahead. (We must not allow a robot gap!)
With proliferation, however, we face two simultaneous concerns. The first is the old tale: outdated technology is likely to be sold off to allies or to those with the most money. These are usually larger systems, like fighter jets, tanks, or missiles. But the real worry, at least in my view, is the proliferation of small or “light” autonomous systems. Miniaturization and swarm tactics mean that states will seek smaller and smaller systems that can breach missile or mortar defenses or operate in “contested electromagnetic” environments. Such small systems may offer some operational advantages, but if one or several go astray, or are found or stolen (it is easier to steal an autonomous system the size of your finger or palm than an entire fighter jet), then they can be used by nonstate actors, reverse engineered, mass produced, and proliferated in the same way small arms have proliferated. If this happens, we must be very attentive to the end results. We will have states making smarter and smarter and smaller and smaller weapons, and it is unlikely that nonstate actors will be excluded from this marketplace.
While autonomous technologies ought to be pursued in other areas, many experts agreed that it is dangerous to weaponize them. Let us use technology to save life rather than take it, and doing so may require us to look beyond short-term gains in operational or tactical security during war toward long-term strategic stability for the world.