The Value Alignment Problem’s Problem

At a recent workshop and conference on beneficial artificial intelligence (AI), one of the overriding concerns was how to design beneficial AI. To do this, the AI needs to be aligned with human values; this challenge is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, however one creates an AI, it may maximize one value to the detriment of other socially useful or even noninstrumental values.
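To make the worry concrete, here is a deliberately toy sketch of value misspecification (the action set, rewards, and “hidden” cost below are all invented for illustration, not drawn from any real system). A planner handed a scalar value function maximizes exactly what is specified and nothing else; any value left out of the function gets traded away whenever doing so raises the score.

```python
# Toy sketch of value misspecification: the planner optimizes the reward
# it is given, not the values the designer forgot to encode.
from itertools import product

# (specified_reward, hidden_damage) per action. The designer scored only
# cleanliness; the broken vase never enters the value function.
OUTCOMES = {
    "mop_carefully": (1.0, 0.0),  # slower, no side effects
    "mop_fast":      (1.5, 1.0),  # cleans more, knocks over the vase
}

def plan(horizon):
    """Return the action sequence maximizing the *specified* reward only."""
    return max(product(OUTCOMES, repeat=horizon),
               key=lambda seq: sum(OUTCOMES[a][0] for a in seq))

seq = plan(horizon=3)
print(seq)                               # ('mop_fast', 'mop_fast', 'mop_fast')
print(sum(OUTCOMES[a][0] for a in seq))  # 4.5: the score we asked for
print(sum(OUTCOMES[a][1] for a in seq))  # 3.0: the cost nobody wrote down
```

The point is not that such code is wrong, but that it is right: it does precisely what the value function says, which is the alignment problem in miniature.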


Algorithmic Bias: How the Clinton Campaign May Have Lost the Presidency or Why You Should Care

This post is a co-authored piece:

Heather M. Roff, Jamie Winterton and Nadya Bliss of Arizona State’s Global Security Initiative

We’ve recently been informed that the Clinton campaign relied heavily on an automated decision aid to inform senior campaign leaders about likely scenarios in the election. This algorithm, known as “Ada,” was a key component, if not “the” component, in how senior staffers formulated campaign strategy. Unfortunately, we know little about the algorithm itself. We do not know all of the data used in the various simulations it ran, or what its programming looked like. Nevertheless, we can be fairly sure that demographic information, prior voting behavior, prior election results, and the like were among its variables, as these are stock-in-trade for any social scientist studying voting behavior. What is more interesting, however, is that there were almost certainly other, less straightforward variables that ultimately left the campaign unable to see the potential losses in states like Wisconsin and Michigan, and the near loss of Minnesota.

But to see why “Ada” didn’t live up to her namesake (Ada, Countess of Lovelace, progenitor of computing), we must delve into what an algorithm is, what it does, and how humans interact with its findings. This is an important point for those of us trying to understand not merely what happened this election, but also how increasing reliance on algorithms like Ada can fundamentally shift our politics and blind us to the limitations of big data. Let us begin, then, at the beginning.
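We do not know Ada’s internals, so what follows is a purely hypothetical sketch of the general class of tool: a Monte Carlo simulator over assumed state-level win probabilities. Every state, vote count, and probability below is invented. The sketch shows only the structural point that the headline forecast inherits whatever bias lives in the inputs.

```python
# Hypothetical Monte Carlo election simulator. This is NOT Ada, whose
# internals are unknown; it only illustrates garbage in, garbage out.
import random

SAFE_VOTES = 232        # electoral votes assumed locked up (an input assumption)
BATTLEGROUNDS = {       # state: (electoral votes, assumed P(win))
    "WI": (10, 0.85),   # if the polling model overstates these
    "MI": (16, 0.80),   # probabilities, every simulated outcome
    "PA": (20, 0.75),   # inherits the error
    "MN": (10, 0.90),
}

def p_victory(n_runs=100_000, needed=270):
    wins = 0
    for _ in range(n_runs):
        total = SAFE_VOTES + sum(ev for ev, p in BATTLEGROUNDS.values()
                                 if random.random() < p)
        wins += total >= needed
    return wins / n_runs

print(f"Simulated P(victory) = {p_victory():.1%}")
# The output looks precise to a tenth of a percent, but it is only as
# good as the assumed win probabilities fed in above.
```

A decision aid of this shape can run a hundred thousand elections a night and still be wrong in all of them, because the simulations never leave the world described by their inputs.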


Analogies in War: Marine Mammal Systems and Autonomous Weapons

Last week I was able to host and facilitate a multi-stakeholder meeting of governments, industry and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment and use of autonomous weapons systems (AWS).  These two concepts presently dominate discussion over whether to regulate or ban AWS, but neither concept is fully endorsed internationally, despite work from governments, academia and NGOs.  On one side many prefer the notion of “control,” and on the other “judgment.”

Yet what has become apparent from many of these discussions, my workshop included, is that there is a need for an appropriate analogy to help policy makers understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy to AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us understand the nuance of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.


Empathy, Envy and Justice: The Real Trouble for Algorithm Bias

Rousseau once remarked that “It is, therefore, very certain that compassion is a natural sentiment, which, by moderating the activity of self-esteem in each individual, contributes to the mutual preservation of the whole species” (Discourse on Inequality). Indeed, it is compassion, and not “reason,” that keeps this frail species progressing. Yet this ability to be compassionate, which is by its very nature an other-regarding ability, is (ironically) the other side of the same coin: comparison. Comparison, or perhaps “reflection on certain relations” (e.g. small/big; hard/soft; fast/slow; scared/bold), also has degenerative offshoots: pride and envy. These twin vices, for Rousseau, are the root of many of the evils in this world. They are tempered by compassion, but they engender the greatest forms of inequality and injustice.

Rousseau’s insights ought to ring true in our ears today, particularly as we attempt to create artificial intelligences to overtake or mediate many of our social relations. Recent attention to “algorithmic bias,” where an algorithm draws on biased assumptions or biased training data and yields discriminatory results, is, I would argue, working the problem of reducing bias from the wrong direction. Many, the White House included, are presently paying much attention to how to eliminate algorithmic bias, or in some instances to solve the “value alignment problem” and thereby indirectly eliminate it. Why does this matter? Allow me a brief technological interlude on machine learning and AI to illustrate why eliminating this bias (à la Rousseau) is impossible.
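Here is a minimal sketch of that interlude, using entirely synthetic data (the groups, features, and thresholds are invented). Train a standard classifier on historical decisions that encode a disparity, withhold the protected attribute, and the model reproduces the disparity anyway through a correlated proxy; no explicit rule about the group is ever written down.

```python
# Synthetic demonstration: a model trained on biased historical labels
# reproduces the bias on new predictions. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)          # true qualification, identical across groups

# Historical decisions: same skill, but group 1 was held to a higher bar.
hired = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train WITHOUT the protected attribute; a proxy feature leaks it anyway.
zipcode = group + rng.normal(0, 0.3, n)   # correlated stand-in feature
X = np.column_stack([skill, zipcode])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# Equal skill distributions, unequal predictions: the model learned the
# historical disparity from the labels, not from any explicit rule.
```

Deleting the bias is not a matter of deleting a line of code; the bias is in the labels, and the labels are the record of our own past comparisons.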


Improvised Explosive Robots

A common argument made in favor of using robotics to deliver (lethal) force is that the violence is mediated in such a way that it naturally de-escalates a situation. In some versions, this is because the “robot doesn’t feel emotions” and so is not subject to fear or anger. In other strands, the argument is that, given distance in time and space, human operators are able to take in more information and make better judgments, including judgments to use less-than-lethal or nonlethal force. These debates have, up until now, mostly occurred with regard to armed conflict. However, with the Dallas police chief’s decision to use a bomb disposal robot to deliver lethal force to the Dallas gunman, we have entered a new dimension of this discussion: domestic policing.

Now, I am not privy to all of the details of the Dallas police force, nor am I going to argue that the decision to use lethal force against Micah Johnson was unjustified. The ethics of self- and other-defense suggest that Mr. Johnson’s actions and his continued posing of a lethal and imminent threat meant that officers were justified in using lethal force to protect themselves and the wider community. Moreover, state and federal law allows officers to use “reasonable” amounts of force, not merely the minimal amount of force needed to carry out their duties. Thus I am not going to argue the ethics or the legality of using a robot to deliver a lethal blast to an imminent threat.

What is of concern, however, is how the arguments made in favor of increased use of robotics in policing (or war) fail to take psychological and empirical facts into consideration. If we take these into account, what we might glean is that the trend goes in the other direction: the availability and use of robotics may actually escalate the level of force used by officers.


Kill Webs: The Wicked Problem of Future Warfighting

The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or a ship’s pings.

Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept.  Unfortunately, however, there is scant discussion about what the Kill Web actually is or entails.  We have glimpses of the technologies that will comprise it, such as integrating sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.


Adaptability or Compliance? Modular Weapons and the Rules of International Law


As many who read this blog will note, I am often concerned with the impact of weapons development on international security, human rights and international law. I’ve spent much time considering whether autonomous weapons violate international law, whether they will run us headlong into arms races, or whether they will give some states incentives to oppress their peoples. Recently, however, I’ve started to think a bit less about future (autonomous) weapons and a bit more about new configurations of existing (semi-autonomous) weapons, and what those new configurations may portend. One article that came out this week in Defense One really piqued my interest in this regard: “Why the US Needs More Weapons that can be Quickly and Easily Modified.”


Distance and Death: Lethal Autonomous Weapons and Force Protection

In 1941 Heinrich Himmler, one of history’s most notorious war criminals and mass murderers, was faced with an unexpected problem: he could not keep using SS soldiers to murder the Jewish population because the SS soldiers were breaking psychologically. As August Becker, a member of the Nazi gas-van program, recalls,

“Himmler wanted to deploy people who had become available as a result of the suspension of the euthanasia programme, and who, like me, were specialists in extermination by gassing, for the large-scale gassing operations in the East which were just beginning. The reason for this was that the men in charge of the Einsatzgruppen [SS] in the East were increasingly complaining that the firing squads could not cope with the psychological and moral stress of the mass shootings indefinitely. I know that a number of members of these squads were themselves committed to mental asylums and for this reason a new and better method of killing had to be found.”


Autonomous Weapons and Incentives for Oppression

Much of the present debate over autonomous weapons systems (AWS) focuses on their use in war. On one side, scholars argue that AWS will make war more inhumane (Asaro, 2012), that the decision to kill must be a human being’s choice (Sharkey, 2010), or that they will make war more likely because conflict will be less costly to wage with them (Sparrow, 2009). On the other side, scholars argue that AWS will make war more humane, as the weapons will be better at upholding the principles of distinction and proportionality (Müller and Simpson, 2014), as well as providing greater force protection (Arkin, 2009). I would, however, like to look at a different dimension: authoritarian regimes’ use of AWS for internal oppression and political survival.


Strategic Surprise? Or The Foreseeable Future

When the Soviets launched Sputnik in 1957, the US was taken off guard.  Seriously off guard.  While Eisenhower didn’t think the pointy satellite was a major strategic threat, the public perception was that it was.  The Soviets could launch rockets into space, and if they could do that, they could easily launch nuclear missiles at the US.  So, aside from a damaged US ego about losing the “space race,” the strategic landscape shifted quickly and the “missile gap” fear was born.

The US’s “strategic surprise” and the subsequent public backlash caused the US to embark on a variety of science and technology ventures to ensure that it would never face such surprise again. One new agency, the Advanced Research Projects Agency (ARPA), was tasked with generating strategic surprise – and guarding against it. While ARPA became DARPA (the Defense Advanced Research Projects Agency) in the 1970s, its mission did not change.

DARPA has been, and still is, the main source of major technological advancement for US defense, and we would do well to remember its primary mission: to prevent strategic surprise. Why, one might ask, is this important to students of international affairs? Because technology has always been one of the major variables (sometimes ignored) that affect relations between international players. Who has what, what their capabilities are, whether they can translate those capacities into power, whether they can reduce uncertainty and the “fog and friction” of war, whether they can predict future events, whether they can understand their adversaries; and on and on the questions go. But at base, we utilize science and technology to pursue our national interests and answer these questions.

In my last post I brought attention to the DoD’s new “Third Offset Strategy.” This strategy, I explained, is based on the assumption that scientific achievement and the creation of new weapons and systems will allow the US to maintain superiority and never fall victim to strategic surprise (again). Like the first and second offsets, the third seeks to leverage advancements in physics, computer science, robotics, artificial intelligence, and electrical and mechanical engineering to “kick the crap” out of any potential adversary.

Yet, aside from noting these requirements, what, exactly, would the US need to do to “offset” the threats from Russia, China, various actors in the Middle East, terrorists (at home and abroad), and any unforeseen “unknown unknowns”? I think I have a general idea, and if I am at all, or even partially, correct, we need to have a public discussion about this now.


The Self-Fulfilling Prophecy of High Tech War


In the fall of 2014, former Defense Secretary Chuck Hagel announced his plan to maintain US superiority against rising powers (i.e. Russia and China). His claim was that the US cannot lose its technological edge – and thus its superiority – against a modernizing Russia and a rapidly militarizing China. To ensure this edge, he called for a “Third Offset Strategy.”


The Politics of Resettlement: Migrants vs. Refugees

We are witnessing the horror of war. We see it every day, with fresh pictures of refugees risking their lives on the sea, rather than risking death by shrapnel, bombs, assassination or enslavement. For the past four years, over 11 million Syrians have left their homes; 4 million of them have left Syria altogether. Each day thousands attempt to get to a safer place, a better life for themselves and their children. Each day, the politics of resettlement and the fear of terrorism play their part.

The last major resettlement campaign in the US came after the Vietnam War. Over a 20-year period, 2 million people from Laos, Cambodia and Vietnam were resettled in the US. The overall number of refugees resettled in this period is roughly 3 million. Since the beginning of the civil war in Syria in 2011, Turkey alone has taken 2 million Syrian refugees within its borders. In short, Turkey has absorbed the same number of war refugees in a four-year period that the US absorbed over five times as long.

Turning to the Syrian case, which has produced more refugees than any war in the past 70 years, we find a very dismal record of resettlement beyond Syria’s near neighbors. The Syrian conflict began in early 2011, and while the violence quickly escalated, I take the numbers of Syrian refugees admitted to the US starting in 2012. In 2012, the US admitted 35 Syrian refugees. In 2013, it admitted 48; in 2014, it admitted 1,307. For 2015, the US estimates it will admit somewhere between 1,000 and 2,000 refugees. Even Canada, which tends to be more open with regard to resettlement and aid, has only admitted about 1,300 refugees, pledging to admit 10,000 more by 2017. In short, since the beginning of this war, one of the most powerful countries in the world, with ample space and the economic capacity to admit more people, has admitted an estimated total of 2,400 people, and its neighbor, a defender of human rights, has admitted about half that. Thinking the other way around, the US has agreed to take in roughly 0.06% of the current population of Syrian refugees (2,400 out of some 4 million), and this number does not take into consideration the 7 million internally displaced people of Syria, or the simple fact that one country (Turkey) has absorbed 45%.


One Size Fits Few

With all of the recent essays on the Duck this summer about the job market, citation indexes, and lack of confidence, there seems to be a brewing undercurrent of anxiety about another academic year. Some of us may be facing down a PhD defense and the job market for the first time, some of us are compiling our pre-tenure review files, and some of us are just generally feeling uneasy about a new area of research or a class we’ve never taught. Some may be anxious about a new job they’ve recently started. I can feel the collective tension reading through the posts and their comments.

I’d like to add one more perspective to the discussion in the hope of easing this tension. Much of what has been said before comes from well within the “traditional” view of academia: a view where one has a tenure-track job or is attempting to get one. The reality is that getting or keeping these jobs is very difficult, and I cannot rehearse the myriad factors that go into each. What I do think is important to note is that these previous discussions carry a working assumption that once one is offered or holds a tenure-track job, one will do anything to take or keep it. The Holy Grail must be achieved at all costs.


Civil(ian) Military Integration & The Coming Problem for International Law

In late May, the People’s Republic of China (PRC) released a white paper on China’s Military Strategy. This public release is the first of its kind, and it has received relatively little attention in the broader media. While much of the strategy is no big surprise (broad and sweeping claims to reunification of Taiwan with mainland China, China’s rights to territorial integrity, self-defense of “China’s reefs and islands,” a nod to “provocative actions” by some of its “offshore neighbors” (read: Japan)), one part of the strategy calls for a little more scrutiny: civil-military integration (CMI).


Deterrence in Cyberspace and the OPM Hack

I have yet to weigh in on the recent hack of the Office of Personnel Management (OPM). This is mostly due to two reasons. The first is the obvious one for an academic: it is summer! The second is that, as most cyber events go, this one continues to unfold. When we learned of the OPM hack earlier this month, the initial figure was 4 million records; that is, 4 million present and former government employees’ personal records were compromised. This week, we’ve learned that it is more like 18 million. While some argue that this hack is not something to be worried about, others are less sanguine. The truth of the matter is, we really don’t know, and coming out on one side or the other is a bit premature. The hack could be state-sponsored, with the data squirreled away in a foreign intelligence agency. Or it could be state-sponsored but with the data sold off to high bidders on the darknet. Right now, it is too early to tell.

What I would like to discuss, however, is what the OPM hack, and many recent others like the Anthem hack, shows about thinking on cybersecurity and cyber “deterrence.” Deterrence, as any IR scholar knows, is about getting one’s adversary not to undertake some action or behavior. It’s about keeping the status quo. When it comes to cyber-deterrence, though, we are left with serious questions about this simple concept. Foremost among them: deterrence from what? All hacking? Data theft? Infrastructure damage? Critical infrastructure damage? What is the status quo? The new cybersecurity strategy released by the DoD in April is of little help. It merely states that the DoD wants to deter states and non-state actors from conducting “cyberattacks against U.S. interests” (10). Yet this is pretty vague. What counts as a U.S. interest?


NSA Reform or Foreign Policy Signaling? Maritime Provisions in Title VIII of the USA Freedom Act

With much attention being given to the passage of the 2015 USA Freedom Act, there is an odd silence about what the bill actually contains. Pundits from every corner point to the demise of section 215 of the Patriot Act (the section that permitted the government to acquire bulk telephony metadata). The bill does in fact do this: it now requires that a “specific selection term” be used instead of bulk general trawling, and it hands the holding of such data over to those who hold it anyway (the private companies). Indeed, the new Freedom Act even “permits” amici curiae before the Foreign Intelligence Surveillance Court, though the court’s judges are not required to have an amicus present and can block participation if they deem it reasonable. In any event, while some ring in the “win” for Edward Snowden and privacy rights, another interesting piece of this bill has passed virtually unnoticed: the extension of “maritime safety” provisions and specific new provisions against nuclear terrorism.


It’s the Biggest National Threat and We Can’t Help You

The Department of Defense’s (DoD) new Cyber Strategy is a refinement of past attempts at codifying and understanding the “new terrain” of cybersecurity threats to the United States. While I applaud many of the acknowledgements in the new Strategy, I remain highly skeptical of the DoD’s ability to translate words into deeds, in particular because the entire Strategy is premised on the admission that the “DoD cannot defend every network and system against every kind of intrusion” because the “total network attack surface is too large to defend against all threats and too vast to close all vulnerabilities” (13).

Juxtapose this admission with the statement that “from 2013-2015, the Director of National Intelligence named the cyber threat as the number one strategic threat to the United States, placing it ahead of terrorism for the first time since the attacks of September 11, 2001” (9). What we have, then, is the admission that the cyber threat is the top “strategic” threat (not a private, individual or criminal one) to the United States, and that the DoD cannot defend against it. The Strategy thus requires partnerships with the private sector and key allies to aid in the DoD’s fight. Here is the rub, though: private industry is skeptical of the US government’s attempts to court it, and many of the US’s key allies do not trust much of what Washington says. Moreover, my skepticism is furthered by the simple fact that one cannot read the Strategy in isolation. Rather, one must take it in conjunction with other policies and measures, in particular Presidential Policy Directive 20 (PPD 20), H.R. 1560, the “Protecting Cyber Networks Act,” and the sometimes forgotten Patriot Act.


The New Mineshaft Gap: Killer Robots and the UN

This past week I was invited to speak as an expert at the United Nations Informal Meeting of Experts under the auspices of the Convention on Certain Conventional Weapons (CCW). The CCW’s purpose is to limit or prohibit certain conventional weapons that are excessively injurious or have indiscriminate effects. The Convention has five additional protocols restricting particular weapons, such as blinding lasers and incendiary weapons. Last week’s meeting focused on whether the member states ought to consider a possible sixth additional protocol on lethal autonomous weapons, or “killer robots.”

My role in the meeting was to discuss the military rationale for the development and deployment of autonomous weapons. My remarks here reflect what I said to the state delegates and are my own opinions on the matter. They reflect what I take to be the central tenet of the debate about killer robots: whether states are engaging in an old debate about relative gains in power and capabilities, and about arms races. The 1964 political satire Dr. Strangelove finds comedy in the fact that, even in the face of certain nuclear annihilation, US strategic leaders were still concerned with the relative disparity of power between the US and the former Soviet Union: the mineshaft gap. The US could not allow the Soviets to gain any advantage in “mineshaft space” – those deep underground spaces where the world’s inhabitants would be forced to relocate to keep the human race alive – because the Soviets would surely continue an expansionist policy and take out the US’s capability once humanity could emerge safely from nuclear contamination.


Stumbling Through Foreign Policy – Not History

Last week Joe Scarborough of Politico raised the question of why US foreign policy in the Middle East is in “disarray.” Citing all of the turmoil of the past 14 years, he posits that both Obama’s and Bush’s decisions for the region were driven by “blind ideology [rather] than sound reason.” Scarborough wonders what historians will say about these policies in the future, but what he fails to realize is that observers of foreign policy and strategic studies need not wait to explain the decisions of the past two sitting presidents. The strategic considerations that shaped not merely US foreign policy, but also US grand strategy, reach back farther than Bush’s first term in office.

Why George W. Bush (Bush 43) engaged US forces in Iraq is a complex history, one that many academics would say requires at least a foray into operational code analysis of his decision making (Renshon, 2008). That is certainly true, but it would still be insufficient to explain the current strategic setting faced by the US, because it ignores the Gulf War of 1991. What is more, understanding that war requires reaching back to the early 1980s and the US Cold War AirLand Battle strategy. Thus, to really answer Scarborough’s question about current US foreign policy, we must look back over 30 years to the beginning of the Reagan administration.


Not What We Bargained For: The Cyber Problem

Last week the New America Foundation hosted the launch of its interdisciplinary cybersecurity initiative. I was fortunate enough to be asked to attend and speak, but the real benefit was the opportunity to listen to some truly remarkable people in the cyber community discuss cybersecurity, law, and war. I heard a few very interesting comments. For instance, Assistant Attorney General John Carlin claimed that “we” (i.e. the United States) have “solved the attribution problem,” and the National Security Agency Director and Cyber Command (CYBERCOM) Commander, Admiral Mike Rogers, said that he will never act outside the bounds of law in his two roles. These statements got me thinking about war, cyberspace and international relations (IR).

In particular, IR scholars have tended to argue over definitions of “cyberwar,” and over whether and to what extent we ought to view this new technology as a “game-changer” (Clarke and Knake 2010; Rid 2011; Stone 2011; Gartzke 2013; Kello 2013; Valeriano and Maness 2015). Liff (2012), for instance, argues that cyber power is not a “new absolute weapon,” but is instead beholden to the same rationale as the bargaining model of war. Of course, the problem for Liff is that the “absolute weapon” he uses as a foil for cyber weapons/war is not equivalent in any sense: the “absolute weapon,” according to Brodie, is the nuclear weapon, which has a different and unique bargaining logic unto itself (Schelling 1977). Conventional weapons follow a different logic (George and Smoke 1974).



