Every day, it seems, we hear more about the advancements of artificial intelligence (AI), the amazing progress in robotics, and the need for greater technological improvements in defense to “offset” potential adversaries. When all three of these arguments are put together, some sort of magic alchemy results in wildly fallacious, and I would say pernicious, claims about the future of war. Much of this has to do, ultimately, with a misunderstanding of the limitations of technology, as well as an underestimation of human capacities. The prompt for this round of techno-optimism debunking is yet another specious claim: that robotic soldiers will be “more ethical” and thus will “not commit rape […] on the battlefield.”
There are actually three lines of thought here that need unpacking. The first involves the capabilities of AI with relation to “judgment.” As the philosopher quoted above, Dr. Hemmingsen, contends: “I don’t think it would take that much for robot soldiers to be more ethical. They can make judgements more quickly, they’re not fearful like human beings and fear often leads people making less than optional decisions, morally speaking [sic].” This sentiment about speed and human emotion (or the lack thereof) has underpinned much of the debate about autonomous weapons for the last decade, if not longer. Dr. Hemmingsen’s views are not original; nor, however, are they grounded in reality.
Having recently attended a workshop and conference on beneficial artificial intelligence (AI), I can say that one of the overriding concerns is how to design beneficial AI. To do this, the AI needs to be aligned with human values; this challenge is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, however one creates an AI, it may maximize that value to the detriment of other socially useful or even noninstrumental values.
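The mechanics of the problem can be seen in a toy sketch (my own illustration, not from the workshop): an agent that optimizes a literally specified value function will happily sacrifice a value its designer cared about but never encoded. The action names and payoffs below are entirely hypothetical.

```python
# Hypothetical actions, each yielding (specified_reward, unstated_side_value).
# The designer cares about both components but only encoded the first.
actions = {
    "cautious": (5, 10),    # modest reward, preserves the side value
    "balanced": (8, 4),
    "reckless": (12, -20),  # highest literal reward, destroys the side value
}

def choose(actions, value_fn):
    """Pick the action that maximizes the given value function."""
    return max(actions, key=lambda a: value_fn(actions[a]))

# The machine optimizes only the value it was given...
naive = choose(actions, lambda outcome: outcome[0])

# ...whereas the designer's true preferences weighted both values.
aligned = choose(actions, lambda outcome: outcome[0] + outcome[1])

print(naive)    # the literal objective favors the destructive action
print(aligned)  # the designer's actual preferences favor caution
```

The gap between `naive` and `aligned` is the misalignment: nothing in the machine’s objective tells it the second component ever mattered.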
This post is a co-authored piece:
Heather M. Roff, Jamie Winterton and Nadya Bliss of Arizona State’s Global Security Initiative
We’ve recently been informed that the Clinton campaign relied heavily on an automated decision aid to inform senior campaign leaders about likely scenarios in the election. This algorithm—known as “Ada”—was a key component, if not the key component, in how senior staffers formulated campaign strategy. Unfortunately, we know little about the algorithm itself. We do not know all of the data used in the various simulations it ran, or what its programming looked like. Nevertheless, we can be fairly sure that demographic information, prior voting behavior, prior election results, and the like were among the variables, as these are stock-in-trade for any social scientist studying voting behavior. What is more interesting, however, is that we can be fairly sure there were other, less straightforward variables that ultimately blinded the Clinton campaign to the potential losses in states like Wisconsin and Michigan, and to the near loss of Minnesota.
But to see why “Ada” didn’t live up to her namesake (Ada, Countess of Lovelace, the progenitor of computing), we need to delve into what an algorithm is, what it does, and how humans interact with its findings. This is an important point for those of us trying to understand not merely what happened in this election, but also how increasing reliance on algorithms like Ada can fundamentally shift our politics and blind us to the limitations of big data. Let us begin, then, at the beginning.
Last week I was able to host and facilitate a multi-stakeholder meeting of governments, industry and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment and use of autonomous weapons systems (AWS). These two concepts presently dominate discussion over whether to regulate or ban AWS, but neither concept is fully endorsed internationally, despite work from governments, academia and NGOs. On one side many prefer the notion of “control,” and on the other “judgment.”
Yet what has become apparent from many of these discussions, my workshop included, is that policymakers need an apt analogy to help them understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy for AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us understand the nuances of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.