Much of the present debate over autonomous weapons systems (AWS) focuses on their use in war. On one side, scholars argue that AWS will make war more inhumane (Asaro, 2012), that the decision to kill must remain a human being’s choice (Sharkey, 2010), or that they will make war more likely because conflict will be less costly to wage (Sparrow, 2009). On the other side, scholars argue that AWS will make war more humane, as the weapons will be better at upholding the principles of distinction and proportionality (Müller and Simpson, 2014), as well as providing greater force protection (Arkin, 2009). I would, however, like to look at a different dimension: authoritarian regimes’ use of AWS for internal oppression and political survival.
There are several hypotheticals at work here, but none are too difficult to support with empirical evidence. First, we could claim that, if left unchecked, the development of AWS by modern militaries would likely proliferate throughout the international system. (We have evidence right now that drones are proliferating rapidly, and within twenty years every country will possess some form of unmanned or remotely piloted vehicle.) In the case of proliferation, one would likely see states that would otherwise be unable to develop AWS on their own, because of limitations in research and development or production, procure such systems either openly or via a black market. If this is so, we might see states pursuing procurement for a variety of reasons (status, deterrence, oppression). For the authoritarian regime, there would likely be strong incentives to procure AWS not for use against another state but for domestic use. This is because authoritarian regimes, which rely on oppression, domination, and intimidation, would see such systems as a cost-effective and reliable way to maintain power.
Drawing from work on political survival, we might follow Bueno de Mesquita et al. and agree that leaders survive based on how well they maintain a winning coalition and appease their selectorate (Bueno de Mesquita et al., 1995, 1997, 1999a, 1999b, 2005), particularly when those leaders have access to free resources or foreign aid (Bueno de Mesquita and Smith, 2010). But what might happen if a political leader were able to diminish the importance of his winning coalition and selectorate? In other words, we might claim that appeasing those individuals who put a leader in power, and who subsequently maintain that leader’s position, may no longer be necessary if the leader can cheaply and reliably oppress them rather than providing them with goods.
How might this happen? If a political leader did not require a large human police force, militia, or mercenaries to monitor and oppress, but could automate such oppression, there would be a serious incentive to pursue the technology. Autonomous weapons linked, for example, to systems that can track individuals’ movements, online presence, and social media, and/or that utilize facial, gesture, or intention recognition, would provide consistent avenues for oppression. Such systems could, with the appropriate hardware and software, carry out such targeting onboard and learn about new or potential threats. Leaders with access to these types of weapons do not need to pay them rents, and the members of a “winning coalition,” if we may still use the phrase, may be intimidated into support rather than won over through the provision of goods.
One might object here and claim that despite technological development, such a situation would be science fiction: leaders require large bureaucracies and political institutions to survive, and even the presence of AWS cannot change this fact. Institutions are populated by people, and people require some provision of goods to support a leader. This is certainly true, but it misses an important point: we have never yet seen a system of oppression like this. Authoritarian and totalitarian regimes have always relied on human beings to terrorize their populations, and because humans are easily corruptible, insincere, jealous, and fickle, those systems are always in a precarious balance. Thus in most, if not all, cases to date, one must satisfy (some) people to maintain power.
This may change, however, when one does not need to rely on fickle humans to do one’s bidding. When one relies on a system of violence, one must maintain the support of the individuals perpetrating the harm. If one does not need to pay them, or provide them with status, goods, or the like, then one need not fear their dissatisfaction. Indeed, with an AWS, one does not need to worry whether it will continue to support a policy position, or whether it will ask for a raise or more rents.
One might still counter that persistent surveillance and terror rarely work for long, and that individuals would eventually rise up against such a system. This may certainly happen, but it may not. Depending upon the type of regime, its access to free resources and foreign aid, and its technological capacities, a leader may be able to survive through the use of automated violence, and even in these restricted cases the precedent will work to attract others. Of course there is much nuance here, and the empirics may change depending on the case. But we would do well to remember that debating the (de)merits of autonomous weapons systems requires that we consider domestic matters as much as international ones.
Dr. Roff,
I’m pleased to see you raising this relatively neglected issue, which I suspect may pose more novel challenges than the use of AWS in war between states.
“One might object here and claim that despite technological development, such a situation would be science fiction”
Less distant versions could simply reduce the size of the loyalist forces required for regime survival, shrinking Bueno de Mesquita’s “selectorate.”
For example, suppose that riot-control robots require significant human remote supervision. The human operators could be less numerous, more closely supervised (via digital records), less exposed to sacrifice in combat, and centralized in a smaller number of locations where a few armed loyalists could manage many unarmed remote operators.
Robotic riot control could also make it easier to suppress protests without providing powerful symbols, e.g. by reducing officer-protection thresholds and making more use of nonlethal force.
In the long run, more general AI systems might indeed be capable of fully replacing security forces and insulating authoritarian regimes from challenge.
In democracies, there are issues to investigate concerning civilian control of military and police forces. What technical and institutional mechanisms will regulate the commands given to automated security forces? There are interesting options for the use of cryptographic methods, but also very difficult challenges for oversight of the programming of powerful AI systems.