NSA Reform or Foreign Policy Signaling? Maritime Provisions in Title VIII of the USA Freedom Act

With much attention being given to the passage of the 2015 USA Freedom Act, there is an odd silence about what the bill actually contains. Pundits from every corner point to the demise of Section 215 of the Patriot Act (the section that permits the government to acquire bulk telephony metadata). The bill does in fact do this: it now requires a “specific selection term” to be used instead of bulk general trawling, and it hands the holding of such data over to the agents who hold it anyway (the private companies). Indeed, the new Freedom Act even “permits” amicus curiae for the Foreign Intelligence Surveillance Court, though the judges of the court are not required to have an amicus present and can block participation if they deem doing so reasonable. In any event, while some ring in the “win” for Edward Snowden and privacy rights, another interesting piece of this bill has passed virtually unnoticed: extending “maritime safety” rights and enacting specific provisions against nuclear terrorism.

It’s the Biggest National Threat and We Can’t Help You

The Department of Defense’s (DoD) new Cyber Strategy is a refinement of past attempts at codifying and understanding the “new terrain” of cybersecurity threats to the United States. While I applaud many of the acknowledgements in the new Strategy, I am still highly skeptical of the DoD’s ability to translate words into deeds. In particular, I am skeptical because the entire Strategy is premised on the fact that the “DoD cannot defend every network and system against every kind of intrusion” because the “total network attack surface is too large to defend against all threats and too vast to close all vulnerabilities” (13).

Juxtapose this fact with the statement that “from 2013-2015, the Director of National Intelligence named the cyber threat as the number one strategic threat to the United States, placing it ahead of terrorism for the first time since the attacks of September 11, 2001” (9). What we have, then, is the admission that the cyber threat is the top “strategic” threat (not a private, individual, or criminal one) to the United States, and that the DoD cannot defend against it. The Strategy thus requires partnerships with the private sector and key allies to aid in the DoD’s fight. Here is the rub, though: private industry is skeptical of the US government’s attempts to court it, and many of the US’s key allies do not trust much of what Washington says. Moreover, my skepticism is furthered by the simple fact that one cannot read the Strategy in isolation. Rather, one must take it in conjunction with other policies and measures, in particular Presidential Policy Directive 20 (PPD 20), H.R. 1560, the “Protecting Cyber Networks Act,” and the sometimes forgotten Patriot Act.

The New Mineshaft Gap: Killer Robots and the UN

This past week I was invited to speak as an expert at the United Nations Informal Meeting of Experts under the auspices of the Convention on Certain Conventional Weapons (CCW). The CCW’s purpose is to limit or prohibit certain conventional weapons that are excessively injurious or have indiscriminate effects. The Convention has five additional protocols banning or restricting particular weapons, such as blinding lasers and incendiary weapons. Last week’s meeting focused on whether the member states ought to consider a possible sixth additional protocol on lethal autonomous weapons or “killer robots.”

My role in the meeting was to discuss the military rationale for the development and deployment of autonomous weapons. My remarks here reflect what I said to the state delegates and are my own opinions on the matter. They reflect what I take to be the central tenet of the debate about killer robots: whether states are engaging in an old debate about relative gains in power and capabilities and arms races. The 1964 political satire Dr. Strangelove finds comedy in the fact that, even in the face of certain nuclear annihilation for both the US and the Soviet Union, the US strategic leaders were still concerned with the relative disparity of power: the mineshaft gap. The US could not allow the Soviets to gain any advantage in “mineshaft space” – those deep underground spaces where the world’s inhabitants would be forced to relocate to keep the human race alive – because the Soviets would certainly continue an expansionist policy and take out the US’s capability once humanity could emerge safely from nuclear contamination.

Stumbling Through Foreign Policy – Not History

Last week Joe Scarborough, writing in Politico, raised the question of why US foreign policy in the Middle East is in “disarray.” Citing all of the turmoil of the past 14 years, he posits that both Obama’s and Bush’s decisions for the region were driven by “blind ideology [rather] than sound reason.” Scarborough wonders what historians will say about these policies in the future, but what he fails to realize is that observers of foreign policy and strategic studies need not wait for the future to explain the decisions of the past two presidents. The strategic considerations that shaped not merely US foreign policy, but also US grand strategy, reach back farther than Bush’s first term in office.

Understanding why George W. Bush (Bush 43) engaged US forces in Iraq is a complex history that many academics would say requires at least a foray into operational code analysis of his decision making (Renshon, 2008).   This position is certainly true, but it too would be insufficient to explain the current strategic setting faced by the US because it would ignore the Gulf War of 1991. What is more, understanding this war requires reaching back to the early 1980s and the US Cold War AirLand Battle strategy.   Thus for us to really answer Scarborough’s question about current US foreign policy, we must look back over 30 years to the beginnings of the Reagan administration.

Not What We Bargained For: The Cyber Problem

Last week the New America Foundation hosted the launch of its interdisciplinary cybersecurity initiative. I was fortunate enough to be asked to attend and speak, but the real benefit was that I was afforded an opportunity to listen to some really remarkable people in the cyber community discuss cybersecurity, law, and war. I heard a few very interesting comments. For instance, Assistant Attorney General John Carlin claimed that “we” (i.e. the United States) have “solved the attribution problem,” and the National Security Agency Director and Cyber Command (CYBERCOM) Commander, Admiral Mike Rogers, said that he will never act outside the bounds of law in his two roles. These statements got me to thinking about war, cyberspace and international relations (IR).

In particular, IR scholars have tended to argue over the definitions of “cyberwar,” and whether and to what extent we ought to view this new technology as a “game-changer” (Clarke and Knake 2010; Rid 2011; Stone 2011; Gartzke 2013; Kello 2013; Valeriano and Maness 2015).   Liff (2012), for instance, argues that cyber power is not a “new absolute weapon,” and it is instead beholden to the same rationale of the bargaining model of war. Of course, the problem for Liff is that the “absolute weapon” he utilizes as a foil for cyber weapons/war is not equivalent in any sense, as the “absolute weapon,” according to Brodie, is the nuclear weapon and so has a different and unique bargaining logic unto itself (Schelling 1977). Conventional weapons follow a different logic (George and Smoke 1974).

Drones, Decapitation, ISA and Impossible Strategies

Yesterday at ISA, I participated in a panel on technology and international security. One of the topics addressed was the “successfulness” of the Obama administration’s decapitation/targeted killing strategy against terrorist leaders through unmanned aerial vehicles or “drones.” The question of success, however, got me to thinking. Success was described as the military effectiveness of the strikes, but this seems to me rather wrongheaded. For if something is militarily effective, then it is so in relation to a military objective.

What is a military objective? In short, those objects that “by their nature, location, purpose or use make an effective contribution to the military action and whose partial or total destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.” One may only target legitimate military objectives, and only with permissible means. But even this requires knowing what the military advantage will be, and as such it requires a clear and identifiable strategy.

The “Right” to Be Forgotten & Digital Leviathans

We hear every day that technology is changing rapidly and that we are at risk of others violating our rights through digital means. We hear about cyber attacks that steal data, such as credit card numbers, social security numbers, names, incomes, or addresses. We hear about attacks that steal intellectual property, from movies to plans for the F-35 Joint Strike Fighter. Indeed, we face a continual onslaught not only from the cyber criminals but from the media as well. One of the lesser-reported issues in the US, however, has been a different discussion about data and rights protection: the right to be forgotten.

Last year, the European Court of Justice ruled in Google v. Costeja that European citizens have the right, under certain circumstances, to request that search engines like Google remove links that contain personal information about them. The Court held that in instances where data is “inaccurate, inadequate, irrelevant or excessive,” individuals may request that the information be erased and delinked from the search engines. This “right to be forgotten” is intended to support and complement an individual’s privacy rights. It is not absolute, but must be balanced “against other fundamental rights, such as freedom of expression and of the media” (paragraph 85 of the ruling).

In the case of Costeja, he asked that a 1998 article in a Spanish newspaper be delinked from his name, for that article contained information pertaining to an auction of his foreclosed home. Mr. Costeja subsequently paid the debt, and so on these grounds the Court ruled that the link to his information was no longer relevant. The case did not hold that information regarding Mr. Costeja has to be erased, or that the newspaper article be eliminated, merely that the search engine results need not make this particular information “ubiquitous.” The idea is that in an age of instantaneous and ubiquitous information about private details, individuals have a right to try to balance their personal privacy against other rights, such as freedom of speech.

SOTU: Cyber What?

In last night’s State of the Union Address, President Obama briefly reiterated the point that Congress has an obligation to pass some sort of cybersecurity legislation to protect “our networks,” our intellectual property and “our kids.” The proposal appears to be a reiteration of the call for companies to share more information with the government, in real time, about the hacks they are suffering. Yet there is something a bit odd about President Obama’s cybersecurity call to arms: the Sony hack.

Public attention to the Sony hack has run from the embarrassing emails about movie stars, to the almost immediate claims from the Federal Bureau of Investigation (FBI) that the attack came from North Korea, to the handwringing over what kind of “proportional” response to launch against the Kim regime; we have watched the cybersecurity soap opera unfold. In what appears to be the finale, we now have reports that the National Security Agency (NSA) watched the attack unfold, and that it was really the NSA’s evidence, and not that of the FBI, that supported President Obama’s certainty that North Korea, and not some disgruntled Sony employee, was behind the attack. Where does this leave us with the SOTU?

First, if we believe that the NSA watched the Sony attack unfold—and did not warn Sony—then no amount of information sharing from Sony would have mattered. Sony was de facto sharing information with the government whether it permitted this or not. This raises concerns about the extent to which monitoring foreign attacks violates the privacy rights of individuals and corporations. Was the NSA watching traffic, or was it inside Sony’s networks too?

Second, the NSA did not stop the attack from happening. Rather, it and the Obama administration let the political drama unfold and took the opportunity to issue a “proportionate” response through targeted sanctions against some of the ruling North Korean elite. The sanctions merely add to those already in place against North Korean agencies and individuals, and so functionally they are little more than show. The only sense that I can make of this is that the administration desired to signal publicly to the Kim regime and all other potential cyber attackers that the US will respond to attacks in some manner. This supports Erik Gartzke’s argument that states do not require 100% certainty about who launched an attack in order to retaliate. If states punish the “right” actor, then all the better; if they do not, they still send a deterrent signal to those watching. However, if this is so, it is immediately apparent that Sony was sacrificed to the cyber-foreign-policy gods, and that a different cost-benefit calculation was going on in the White House.

Finally, let’s get back to the Sony hack and the SOTU address. If the US was taking the Sony hack as an opportunity for deterrence, then it allowed Sony to suffer a series of attacks and did nothing to protect it. If this is the case, then the notion that we need more information sharing with the government may be false. What the government really wants is more permission, more consent, from the companies it is already watching. Protecting the citizens and corporations of the US requires a delicate balance between privacy and security. However, attempting to corrupt the means of maintaining security, such as by outlawing encryption, only makes citizens and corporations more unsafe and insecure. If the US government really wants to protect the “kids” from cyber criminals, then it should equip those kids with the strongest encryption there is, and teach good cyber practices.

Autonomous or "Semi" Autonomous Weapons? A Distinction without Difference

Over the New Year, I was fortunate enough to be invited to speak at an event on the future of Artificial Intelligence (AI) hosted by the Future of Life Institute. The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts, to its technological abilities, to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue to proceed on their current trajectory, we will see more complex software architectures and stronger AIs.   Thus the capabilities created in AI will directly affect the capabilities of autonomous weapons and vice versa. While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons.

First, autonomous weapons are those that are capable of targeting and firing without intervention by a human operator. Presently there are no autonomous weapons systems fielded. However, there are a fair number of semi-autonomous weapons systems currently deployed, and this workshop on AI got me to thinking more about the line between “full” and “semi.” The reality, at least the way that I see it, is that we have been using the terms “fully autonomous” and “semi-autonomous” to describe whether all of the operational functions on a weapons system are operating “autonomously” or only some of them are. Allow me to explain.

We have roughly four functions on a weapons system: trigger, targeting, navigation, and mobility. We might think of these functions like a menu that we can order from. Semi-autonomous weapons have at least one, though not all, of these functions operating autonomously. For instance, we might say that the Samsung SGR-1 has an “autonomous” targeting function (through heat and motion detectors), but is incapable of navigation, mobility or triggering, as it is a sentry-bot mounted on a defensive perimeter. Likewise, we would say that precision guided munitions are also semi-autonomous, for they have autonomous mobility, triggering, and in some cases navigation, while the targeting is done through a preselected set of coordinates or by “painting” a target with laser guidance.
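
To make this “menu” concrete, here is a minimal sketch, in Python, of how one might represent the four functions and classify a system by which of them run autonomously. The class, labels, and example entries are my own illustrative stand-ins drawn from the descriptions above, not a model of any actual weapons software.

```python
# Illustrative sketch only: a toy representation of the four-function "menu".
from dataclasses import dataclass

FUNCTIONS = ("trigger", "targeting", "navigation", "mobility")

@dataclass
class WeaponSystem:
    name: str
    autonomous: set  # the subset of FUNCTIONS performed without a human operator

    def classify(self) -> str:
        # "Fully autonomous" here simply means every function on the menu is
        # autonomous; anything in between counts as "semi-autonomous".
        if self.autonomous == set(FUNCTIONS):
            return "fully autonomous"
        if self.autonomous:
            return "semi-autonomous"
        return "human-operated"

# Rough renderings of the examples in the text (my reading, not specifications).
sgr1 = WeaponSystem("Samsung SGR-1 (as described above)", {"targeting"})
pgm = WeaponSystem("Precision guided munition", {"mobility", "trigger", "navigation"})

for system in (sgr1, pgm):
    print(f"{system.name}: {system.classify()}")
```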

Where we seem to get into deeper waters, though, is in the cases of “fire and forget” weapons, like the Israeli Harpy, the Raytheon Maverick anti-tank missile, or the Israeli Elbit Opher. While these systems are capable of autonomous navigation, mobility, triggering and, to some extent, targeting, they are still considered “semi-autonomous” because the target (i.e. a hostile radar emitter or the infra-red image of a particular tank) was at some point pre-selected by a human. The software that guides these systems is relatively “stupid” from an AI perspective, as it is merely taking sensor input and doing a representation and search on the targets it identifies. Indeed, even Lockheed Martin’s LRASM (Long-Range Anti-Ship Missile) appears to be in this ballpark, though it is more sophisticated because it can select its own target amongst a group of potentially valid targets (ships). The question has been raised whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made.

The rub in the debate over autonomous weapons systems, and, from what I gather, some of the fear in the AI community, is the targeting software: how sophisticated that software needs to be to target accurately and, what is more, to target objects that are not immediately apparent as military in nature. Hostile radar emitters present few moral qualms, and when the image recognition software used to select a target relies on infra-red images of tank tracks or ships’ hulls, the presumption is that these are “OK” targets from the beginning. I have two worries here. First, from the “stupid” autonomous weapons side of things, military objects are not always permissible targets. Only by an object’s nature, purpose, location, use, and effective contribution can one begin to consider it a permissible target. If the target passes this hurdle, one must still determine whether attacking it provides a definite military advantage. Nothing in the current systems seems to take this requirement into account, and as I have argued elsewhere, future autonomous weapons systems would need to do so.
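
To make that two-step requirement explicit, here is a toy sketch of what such a check would have to look like. The attribute and function names are hypothetical stand-ins for the legal criteria just listed; no fielded system exposes anything like this, which is precisely the worry.

```python
# A toy sketch of the two-step test described above; illustrative only.
from dataclasses import dataclass

@dataclass
class CandidateTarget:
    makes_effective_contribution: bool  # by nature, purpose, location, or use
    definite_military_advantage: bool   # in the circumstances ruling at the time

def permissible_to_engage(target: CandidateTarget) -> bool:
    # Step 1: is the object a military objective at all?
    if not target.makes_effective_contribution:
        return False
    # Step 2: even then, attacking it must offer a definite military advantage.
    return target.definite_military_advantage

# A hostile radar emitter would typically clear step 1; whether it clears
# step 2 depends on context that the current "stupid" systems do not evaluate.
print(permissible_to_engage(CandidateTarget(True, False)))  # prints False
```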

Second, from the perspective of the near-term “not-so-stupid” weapons, at what point would targeting human combatants come into the picture? We presently have AI capable of facial recognition with near-perfect accuracy (just upload an image to Facebook to find out). But more than this, the current leading AI companies are showing that artificial intelligence is capable of learning at an impressively rapid rate. If this is so, then it is not far-fetched to think that militaries will want some variant of this capacity on their weapons.

What then might the next generation of “semi” autonomous weapons look like, and how might those weapons change the focus of the debate? If I were a betting person, I’d say they will be capable of learning while deployed, will use a combination of facial recognition and image recognition software, as well as infra-red and various radar sensors, and will have autonomous navigation and mobility. They will not be confined to the air domain, but will populate maritime environments and potentially ground environments as well. The question then becomes one not solely of the targeting software, as it would be dynamic and intelligent, but of the triggering algorithm. When could the autonomous weapon fire? If targeting and firing were time dependent, without the ability to “check in” with a human, or if there were simply so many of these systems deployed that checking in became operationally infeasible due to bandwidth, security, and sheer manpower overload, how accurate would the systems have to be to be permitted to fire? 80%? 50%? 99%? How would one verify that the actions taken by the system were in fact in accordance with its “programming,” assuming of course that the learning system doesn’t learn that its programming is hamstringing its ability to carry out its mission objectives?
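
To see how stark that triggering question is, here is a deliberately oversimplified sketch. The threshold value, function names, and the human check-in step are all hypothetical assumptions of mine, not a description of any existing or planned system.

```python
# Illustrative only: the whole moral weight ends up in one numeric comparison.
def request_human_authorization() -> bool:
    # Placeholder for an operator-in-the-loop step; assumed, not specified.
    return False

def may_fire(target_confidence: float,
             human_available: bool,
             threshold: float = 0.99) -> bool:
    """Return True if the system would be permitted to fire."""
    if human_available:
        # Defer to a human whenever bandwidth and manpower allow a check-in.
        return request_human_authorization()
    # Otherwise everything rests on the threshold: 0.80? 0.50? 0.99?
    return target_confidence >= threshold

print(may_fire(target_confidence=0.85, human_available=False))  # prints False
```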

These pressing questions notwithstanding, would we still consider a system such as this “semi-autonomous”? In other words, the systems we have now are permitted to engage targets – that is, target and trigger – autonomously based on some preselected criteria. Would systems that learn from a “training data set” likewise be considered “semi-autonomous” because a human preselected the training data? Common sense would say “no,” but so far militaries may say “yes.” The US Department of Defense, for example, states that a “semi-autonomous” weapon system is one that “once activated, is intended only to engage individual targets or specific target groups that have been selected by a human operator” (DoD, 2012). Yet at what point would we say that “targets” are not selected by a human operator? And who is the operator? The software programmer with the training data set can be an “operator”; the lowly Airman likewise can be an “operator” if she is the one ordered to push a button; so too can the Commander who orders her to push it (though the current DoD Directive makes a distinction between “commander” and “operator,” which problematizes the notion of command responsibility even further). The only policy we have on autonomy does not define, much to my dismay, “operator.” This leaves us in the uncomfortable position that the distinction between autonomous and semi-autonomous weapons is one without difference, and, taken to the extreme, it would mean that militaries need only claim their weapons system is “semi-autonomous,” much to the chagrin of common sense.

Citizens, Beasts or Gods?

Keeping up with the current engagement with artificial intelligence (AI) is a full-time task. Today in the New York Times, two lead articles (here and here) in the technology section were about AI, and another discussed the future of robotics, AI and the workforce. As I blogged last week, the coming economic future of robotics and AI is going to have to contend with some very weighty considerations that are making our society more and more economically, socially and racially divided.

Today, however, I’d like to think about how a society might view an AI, particularly one that is generally intelligent and economically productive. To aid in this exercise I am employing one of my favorite and most helpful philosophers: Aristotle. For Aristotle, man is the only animal that possesses logos. Logos is the ability to use speech and reason. While other philosophers have challenged this conclusion, let’s just take Aristotle at his word.

Logos is what defines a human as a human, and because of Aristotle’s teleology, the use of logos is what makes a human a good human (Ethics, 1095a). Moreover, Aristotle also holds that man is by nature a political animal (Ethics 1097b, Politics, 1253a3). What he means by this is that man cannot live in isolation, and cannot be self-sufficient in isolation, but must live amongst other humans. The polis for him provides all that is necessary and makes life “desirable and deficient in nothing” (Ethics, 1097b).   If one lives outside of the polis, then he is doing so against his nature. As Aristotle explains, “anyone who lacks the capacity to share in community, or has not the need to because of his [own] self-sufficiency, is no part of the city and as a result is either a beast or a god” (Politics, 1253a29). In short, there are three classes of persons in Aristotle’s thinking: citizens, beasts or gods.

Citizens share in community, and according to his writings on friendship, they require a bond of some sort to hold them together (Ethics, 1156a). This bond, or philia, is something shared in common. Beasts, or those animals incapable of logos, cannot by definition be part of a polis, for they lack the requisite capacities to engage in deliberative speech and judgment.   Gods, for Aristotle, also do not require the polis for they are self-sufficient alone. Divine immortals have no need of others.

Yet, if we believe that AI is (nearly) upon us, or at least is worth commissioning a 100-year study to measure and evaluate its impact on the human condition, we have before us a new problem, one that Aristotle’s work helps to illuminate. We potentially have an entity that would possess logos but fail to be a citizen, a beast or a god.

What kind of entity is it? A generally artificially intelligent being would be capable of speech of some sort (that is, communication); it could understand others’ speech (through either voice recognition or text); it would be capable of learning (potentially at very rapid speeds); and, depending upon its use or function, it could be very economically productive for the person or entity that owns it. In fact, if we were to rely on Aristotle, this entity looks more akin to a slave. Though even this understanding is incomplete, for his argument is that the master and slave are mutually beneficial in their relationship, and that a slave is nothing more than “a tool for the purpose of life.” Of course nothing in a relationship between an AI and its “owners” would make the relationship “beneficial” for the AI, unless one thought it possible to give an AI a teleological value structure that defined “benefit” as that which is good for its owner.

If we took this view, however, we would be granting that an AI will never really understand us humans.   From an Aristotelian perspective, what this means is that we would create machines that are generally intelligent and give them some sort of end value, but we would not “share” anything in common with the AI. We would not have “friendship;” we would have no common bond. Why does this matter, the skeptic asks?

It matters for the simple reason that if we create a generally intelligent AI, one that can learn, evolve and potentially act on and in the world, and it has no philia with us humans, then we cannot understand it and it cannot understand us. So what, the objection goes. As long as it is doing what it is programmed to do, all the better for us.

I think this line of reasoning misses something fundamental about creating an AI. We desire to create an AI that is helpful or useful to us, but if it doesn’t understand us, and we fail to see that it is completely nonhuman and will not think or reason like a human, it might be “rational” but not “reasonable.” We would embark on creating a “friendly AI” that has no understanding of what “friend” means, and that holds nothing in common with us from which to form a friendship. The perverse effects of this would be astounding.
I will leave you with one example, and one of the ongoing problems of ethics. Utilitarians view ethics as a moral framework that states that one must maximize some sort of nonmoral good (like happiness or well-being) for one’s action to be considered moral. Deontologists claim that no amount of maximization will justify the violation of an innocent person’s rights. When faced with a situation where one must decide which values to follow, ethicists hotly debate which moral framework to adopt. If an AI programmer says to herself, “well, utilitarianism often yields perverse outcomes, I think I will program a deontological AI,” then the Kantian AI will adhere to a strict deontic structure, so much so that the programmer finds herself in another quagmire: the “fiat justitia ruat caelum” problem (let justice be done though the heavens fall), where the rational begins to look very unreasonable. Both moral theories inform different values in our societies, but both are full of their own problems. There is no consensus on which framework to take, and there is no consensus on the meanings of the values within each framework.
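
A toy illustration of how the programmer’s choice propagates may help here. The “options,” “welfare” scores, and “rights” flags below are invented stand-ins for the two frameworks as I described them above, not a serious moral model.

```python
# Illustrative contrast between a utilitarian maximizer and a deontic filter.
from typing import Dict, List

def utilitarian_choice(options: List[Dict]) -> Dict:
    # Maximize some nonmoral good (here, a bare "welfare" number).
    return max(options, key=lambda option: option["welfare"])

def deontological_choice(options: List[Dict]) -> Dict:
    # Strike out anything that violates a right, whatever the welfare gain,
    # then choose among what remains.
    permitted = [option for option in options if not option["violates_a_right"]]
    if not permitted:
        raise ValueError("No permissible action: let the heavens fall.")
    return max(permitted, key=lambda option: option["welfare"])

options = [
    {"name": "A", "welfare": 100, "violates_a_right": True},
    {"name": "B", "welfare": 10, "violates_a_right": False},
]
print(utilitarian_choice(options)["name"])    # prints A
print(deontological_choice(options)["name"])  # prints B
```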

My point here is that Aristotle rightly understood that humans need each other, and that we must educate and habituate people into moral habits. Politics is the domain where we discuss, deliberate and act on those moral precepts, and it is what makes us uniquely human. Creating an artificial intelligence that looks or reasons nothing like a human carries with it the worry that we have created a beast or a god, or something altogether different. We must tread carefully on this new road, fiat justitia…

What Does the Rise of AI have to do with Ferguson and Eric Garner?

One might think that the future of artificial intelligence (AI) and the recent spate of police brutality against African American males, particularly Michael Brown and Eric Garner, are only remotely related, if they are related at all. However, I want to press us to look at two seemingly unrelated practices (racial discrimination and technological progress) and ask what the trajectories of both portend. I am increasingly concerned about the future of AI, and about the unintended consequences that increased reliance on it will yield. Today, I’d like to focus on just one of those unintended outcomes: an increased racial divide, poverty and discrimination.

Recently, Stephen Hawking, Elon Musk, and Nick Bostrom have argued that the future of humanity is at stake if we create artificial general intelligence (AGI), for such an intelligence would have a great probability of surpassing general intelligence and becoming a superintelligence (ASI). Without careful concern for how such AIs are programmed, that is, for their values and their goals, ASI may start down a path that knows no bounds. I am not particularly concerned with ASI here. This is an important discussion, but it is one for another day. Today, I’m concerned with the AI of the near to middle term that will inevitably be utilized to take over rather low-skill and low-paying jobs. This AI is thought by some to be the beginning of “the second machine age” that will usher in prosperity and increased human welfare. I have argued before that any increase in AI will have a gendered impact on job loss and creation. I would like to extend that today to concerns over race.

Today the New York Times reported that 30 million Americans are currently unemployed, and that the percentage of unemployed men has tripled since 1960. The article also reported that 85% of the unemployed men polled do not possess bachelor’s degrees, and 34% have criminal backgrounds. In another article, the Times also broke down unemployment rates nationally, looking at the geographic distribution of male unemployment. In places like Arizona and New Mexico, for instance, large swaths of land have unemployment rates of 40% or more. Yet if one examines the data more closely, one sees that these high unemployment rates overlay the same tracts of land that are designated tribal areas and reservations, i.e. nonwhite areas.

Moreover, if one looks at the data reported by the Pew Research Center, the gap between white and minority household wealth continues to grow. Pew reports that in 2013 the median net worth of white households was $141,000, while the median net worth of black households was $11,000 – roughly a thirteenfold difference. Minority households, the data say, are more likely to be poor. Indeed, they are likely to be very poor, for the poverty level in the US for a household containing one person is $11,670. Given that Pew also reported that 58% of unemployed women reported taking care of children 18 and younger in their home, there is a strong probability that these households contain more than one person. Add to these facts regarding poverty and unemployment an underlying racial discrimination in the criminal justice system, and one can see where this is going.

While whites occupy better jobs, have better access to education, and have far greater net household wealth, they are also far less likely to experience crime. In fact, The Sentencing Project reports that in 2008 blacks were 78% more likely to be victims of burglary and 133% more likely to experience other types of theft. Compare this with the 2012 statistics that blacks are also 66% more likely to be victims of sexual assault and over six times more likely than whites to be victims of homicide. Minorities are also more often seen as the perpetrators of crime, and, as one study shows, police officers are quicker to shoot at armed black suspects than at white ones.

Thus what we see from a very quick and rough look at various types of data is that poverty, education, crime and the justice system are all racially divided. How does AI affect this? Well, the arguments for AI, and for increasingly relying on AI to generate jobs and produce more wealth and prosperity, are premised on this racist (and gendered) division of labor. As Brynjolfsson and McAfee argue, the jobs that are going to “disappear” are the “dull” ones that computers are good at automating, while jobs that require dexterity and little education – like housecleaning – are likely to stay. Good news for all those very wealthy (and male) maids.

In the interim, Brynjolfsson and McAfee suggest, there will be a readjustment of the types of jobs in the future economy. We will need more people educated in information technologies and in more creative ways of being entrepreneurs in this new knowledge economy. They note that education is key to success in the new AI future, and that parents ought to look to different types of schools that encourage creativity and free thinking, such as Montessori schools.

Yet given the data that we have now about the growing disparity between white and minority incomes, the availability of quality education in poor areas, and the underlying discriminatory attitudes towards minorities, in what future will these already troubled souls rise to prosperity in a “new” economy that automatically shuts them out? How could a household with $11,000 in annual income afford over $16,000 a year in Montessori tuition? Trickle-down economics just doesn’t cut it. Instead, this new vision of an AI economy will reaffirm what Charles Mills calls “the racial contract,” and further subjugate and discriminate against nonwhites (and especially nonwhite women).

If the future looks anything like what Brynjolfsson and McAfee portend, then those who control AI will be those who own, and thus benefit from, lower costs of production through the mechanization and automation of labor. Wealth will accumulate in these hands, and unless one has skills that either support the tech industry or create new tech, one is unlikely to find employment in anything other than unskilled but dexterous labor. Given the statistics that we have today, it is more likely that this wealth will continue to accumulate in primarily white hands. Poverty and crime will continue to fall upon the most vulnerable – and often nonwhite – members of society, and the racial discrimination that pervades the justice system, and with it tragedies like that of Eric Garner, will continue. Unless there is a serious conversation about the extent to which the economy, the education system and the justice system perpetuate and exacerbate this condition, AI will only make these moral failings more acute.

Meaningful or Meaningless Control

In May of 2014, the United Nations Convention on Certain Conventional Weapons (CCW) first considered the issue of banning lethal autonomous weapons. Before the start of the informal expert meetings, Article 36 circulated a memorandum on the concept of “meaningful human control.” The document attempted to frame the discussion around the varying degrees of control over increasingly automated (and potentially autonomous) weapons systems in contemporary combat. In particular, Article 36 posed the question as one about what the appropriate balance of control ought to be over a weapons system that can operate independently of an operator in a defined geographical area for a particular period of time. Article 36 does not define “meaningful control,” but rather seeks to generate discussion about how much control ought to be present, what “meaningful” entails, and how computer programming can enable or inhibit human control. The state parties at the CCW agreed that this terminology was crucial and that no weapons system that lacked meaningful human control ought to be deployed. The Duck’s Charli Carpenter has written about this as well, here.

Last month, in October, the United Nations Institute for Disarmament Research (UNIDIR) held a conference on the concept of meaningful human control. Earlier this month, states again convened in Geneva at another CCW meeting and agreed to further consider the matter in April of 2015. Moreover, other civil society groups are also now beginning to think about what this approach entails. It appears, then, that this concept has become a rallying point in the debate over autonomous weapons. Yet while we have a common term on which to agree, we are not clear on what exactly “control” requires, what proxies we could utilize to make control more efficacious, such as geographic or time limits, or what “meaningful” would look like.

Today, I had an engaging discussion with colleagues about a “semi-autonomous” weapon: Lockheed Martin’s Long-Range Anti-Ship Missile (LRASM). One colleague claimed that this missile is in fact an autonomous weapon, as it selects and engages a target. Another colleague, however, claimed that this was not an autonomous weapon because a human being preselects the targets before launching the weapon. Both my colleagues are correct. Yet how can this be so?

The weapon does select and engage a target after it is launched, and the particular nature of the LRASM is that it can navigate in denied environments where other weapons cannot. It can change course when necessary, and when it finds its way to its preselected targets, it selects among them based upon an undisclosed identification mechanism (probably similar to the image recognition used in other precision guided munitions). LRASM is unique in its navigation and target cuing capabilities, as well as its ability to coordinate with other launched LRASMs. The question of whether it is an autonomous weapon, then, is really a question about meaningful human control.
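
One way to see how both of my colleagues’ claims can be true is to separate the pre-launch choice from the post-launch choice. The following is an abstract sketch of that separation only; the names and the selection rule are invented, and nothing here purports to model LRASM’s actual software.

```python
# Illustrative only: a human preselects a *set* of targets before launch,
# and the weapon later selects *within* that set after launch.
import random

def human_preselects(convoy: list) -> list:
    # Pre-launch: the operator designates the target group.
    return [ship for ship in convoy if ship.startswith("enemy")]

def weapon_selects(preselected: list) -> str:
    # Post-launch: the missile picks one target from within that group; this
    # stand-in rule substitutes for an undisclosed identification mechanism.
    return random.choice(preselected)

convoy = ["enemy_frigate", "enemy_tanker", "neutral_cargo"]
print(weapon_selects(human_preselects(convoy)))
```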

Is it a question about “control” once the missile reaches its target destination and then “decides” which ship amongst the convoy it will attack? Or is it a question about the selection of the grid or space that the enemy convoy occupies? At what point is the decision about “control” to be made?

I cannot fully answer this question here. However, I can raise two potential avenues for the way forward. One is to consider human control not in terms of a dichotomy (there is either a human being deliberating at every juncture and pulling a trigger or there is not), but in terms of an escalatory ladder. That is, we start with the targeting process, from the commander all the way down to a targeteer or weaponeer, and examine how decisions to use lethal force are made and on what basis. This would at least allow us to understand the different domains (air, land, sea) that we are working within, the types of targets likely to be found, and the desired goals to be achieved. It would also allow examination of when particular weapons systems enter the discussion. For if we understand what types of decisions, drawing on various (perhaps automated) types of information, are made along this ladder, then we can determine whether some weapons are appropriate or not. We might even glean what types of weapons are always out of bounds.

Second, if this control ladder is too onerous a task, or perhaps too formulaic, and would induce a perverse incentive to create weapons right up to a particular line of automation, then perhaps the best way to think about what “meaningful human control” entails is not to think about its presence, but rather its absence. In other words, what would “meaningless” human control look like? Perhaps it is better to define the concept negatively, by what it is not, rather than by what it is. We have examples of this already, particularly with the US’s policy regarding covert action. The 1991 Intelligence Authorization Act defines covert action very vaguely, and then in more concrete terms defines what it is not (e.g. intelligence gathering, traditional or routine military or diplomatic operations, etc.). Thus a clear case of “meaningless” control would be to launch a weapon system without undertaking any consideration of the targets, the likely consequences, and the presence of civilian objects or persons, or to launch a weapon that patrols perpetually. This is of course cold comfort to those who want to ban autonomous weapons outright. Banning weapons would require a positive and not a negative definition.

States would have to settle the question of whether any targets on a grid are fair game, or if only pre-identified targets on a grid – and not targets of opportunity – are fair game. It may also require states to become transparent about how such targets on a grid are confirmed, or how large a grid one is allowed to use. For if a search area ends up looking like the entire Pacific Ocean, that pesky question about “meaningful” raises its head again.

Privacy, Secrecy & War: Emperor Rogers and the Failure of NSA Reform

On November 3, the head of Britain’s Government Communications Headquarters (GCHQ) published an opinion piece in the Financial Times, arguing that technology companies such as Twitter, Facebook and WhatsApp (and, by implication, Google and Apple) ought to comply with governments to a greater extent to combat terrorism. When tech companies further encrypt their devices or software, as Apple has recently done with the iPhone 6 and as WhatsApp has accomplished with its messaging, GCHQ chief Hannigan argues, this is tantamount to aiding and abetting terrorists. GCHQ is the sister agency to the US’s National Security Agency (NSA), as both are charged with Signals Intelligence and information assurance.

Interestingly, Hannigan’s opinion piece came only weeks before the US Senate voted on whether to limit the NSA’s ability to conduct bulk telephony metadata collection, as well as to reform other aspects of the NSA’s activities. Two days ago, this bill, known as the “USA Freedom Act,” failed to pass by two votes. While Hannigan stressed that companies ought to be more open to complying with governments’ requests to hand over data, the failure of the USA Freedom Act strengthened at least the US government’s position to continue its mass surveillance of foreign and US citizens. It remains to be seen how the tech giants will react.

The bill also sought, amongst other things, to make transparent the number of requests from governments to tech companies, to force the NSA to seek a court order from the Foreign Intelligence Surveillance Court (FISC) to query the (telecom-held) data, and to require the NSA to list the “specific selection term” to be used while searching the data. Moreover, the bill would have mandated an amicus curiae, or “friend of the court,” in the FISC to offer arguments against government requests for searches, data collection and the like, something the court currently lacks. Many of these reforms were welcomed by tech companies like Google and Apple, and they were also suggested in a 2013 report for the White House on NSA and intelligence reform.

Many of the disagreements over the bill arose along two lines: that the bill hamstrung the US’s ability to “fight terrorists,” and that the bill failed to go far enough in protecting the civil liberties of US citizens. The latter concern arose because the bill would have reauthorized Section 215 of the PATRIOT Act (set to expire in 2015) through 2017. Section 215 permits government agents, such as the FBI and the NSA, to compel third parties to hand over business records and any “other tangible objects” whenever the government requests them in pursuance of an “authorized investigation” against international terrorism or clandestine intelligence activities. In particular, Section 215 merely requires the government to present specific facts that would support a “reasonable suspicion” that the person under investigation is in fact an agent of a foreign power or a terrorist. It does not require a showing of probable cause, only a general test of reasonableness, and this concept of reasonableness is stretched to quite a limit. The Democratic support for the bill came most strongly from Senator Dianne Feinstein (D-Calif.), who is reported to have said, “I do not want to end the program [215 bulk collection],” so “I’m prepared to make the compromise, which is that the metadata will be kept by the telecoms.”

Where, then, does the failure of this bill leave us? In two places, actually. First, it permits the NSA to carry on with the status quo. Edward Snowden’s revelations of mass surveillance appear to have fallen off the American people’s radar, and with them Congress has been permitted to punt on the issue until its next session. Moreover, given that the next session will feature a Republican-dominated House and Senate, there is a high probability that any bill passed will either reaffirm the status quo (i.e. reauthorize Section 215) or potentially strengthen the NSA’s abilities to collect data.

Second, this state of affairs will undoubtedly strengthen the position of Emperor Mike Rogers. Admiral Mike Rogers is the recent replacement for General Keith Alexander as the head of both the NSA and US Cyber Command (Cybercom). I refer to the post holder as “Emperor” not merely due to the vast array of power in the hands of the head of NSA/Cybercom, but also because such an alliance is antithetical to a transparent and vibrant democracy that believes in separation between its intelligence gathering and war making functions. (For more on former Emperor Alexander’s conflicts of interest and misdeeds see here.)

The US Code separates the authorities and roles for intelligence gathering (Title 50) from those for US military operations (Title 10). In other words, it was once believed that intelligence and military operations were separate but complementary in function, and were also limited by different sets of rules and regulations. These range from mundane reporting requirements to more obvious rules about the permissibility of engaging in violent activities. However, with the creation of the NSA/Cybercom Emperor, we have married Title 10 and Title 50 in a rather incestuous way. While it is certainly true that Cybercom and the NSA are both in charge of Signals Intelligence, Cybercom is actively tasked with offensive cyber operations. What this means is that there is a serious risk of conflicts of interest between the NSA and Cybercom, as well as a latent identity crisis for the Emperor. For instance, if one is constantly swapping a Title 10 hat for a Title 50 hat, or viewing operations now as military operations and now as intelligence gathering, there will eventually be a merging of the two. That both post holders have been high-ranking military officers means that the character of NSA/Cybercom will most likely be more militaristic, but with the potential for the Emperor to issue ex post justifications for various “operations” as intelligence gathering under Title 50, and thus subject them to less transparent oversight and reporting.

One might think this fear mongering, but I think not. Suppose, for example, that the Emperor deems it necessary to engage in an offensive cyber operation that might, say, change the financial transactions or statements of a target, and that part of this operation requires the US’s role to remain secret. This operation would be tantamount to a covert action as defined under Section 413b(e) of Title 50. Covert actions have a tumultuous history, but suffice it to say, the President can order them directly, and they have rather limited reporting requirements to Congress. What, however, would be the difference if the same action were ordered by Admiral Rogers in the course of an offensive cyber operation? The same operation, the same person giving the order, but the difference in international and domestic legal regulation is drastic. How could one possibly limit any ex post justification for secrecy if something were to come to light or if harm were inflicted? The answer is that there is no way to do this with the current system, because the post holder is simultaneously a military commander and an intelligence authority.

That the Senate has refused to pass even a watered-down version of NSA reform only further strengthens this position. The NSA is free to collect bulk telephony metadata, and, moreover, it is free to hold that data for up to five years. It can also query the data without requiring a court order to do so, and is not compelled to make transparent any of its requests to telecom companies. Furthermore, one of the largest reforms necessary—that of separating the functions of the NSA and Cybercom—continues to go unaddressed. The Emperor, it would seem, is still free to do what he desires.

The Responsibility to Protect & Fear of Foreign Policy Failure

Last week I had the opportunity to partake in a workshop on the Responsibility to Protect (R2P) at The Hague Institute for Global Justice (the Institute). The Institute is preparing to launch a project on R2P, seeking to bring academics, civil society and government/policy makers together to formulate insightful and policy-relevant findings on R2P. As the workshop was governed by the Chatham House Rule, I will note here only a few of my insights from the workshop, primarily about the connections between the political will to uphold R2P and the theoretical and practical realities of foreign policy.

R2P is a very broad agenda with multiple loci of responsibility. The first covers the responsibility of states to protect their own populations against war crimes, crimes against humanity, genocide and ethnic cleansing. A second locus of responsibility is the “international community”: when states cannot protect their peoples or prevent these crimes, it has an obligation to aid states through various capacity-building and preventive mechanisms. Third and finally, the United Nations Security Council possesses a particular responsibility. When preventive measures fail (or are not forthcoming), the international community as represented through the Security Council has the responsibility to use all peaceful means to protect people from the four R2P crimes. If or when those peaceful means fail, the Security Council has the responsibility to take “timely and decisive” measures, in accordance with Chapter VII of the UN Charter, to protect populations. Such measures include military options, taken with or without the consent of a target state.

These three loci of responsibility track the three Pillars of the doctrine. Pillar One refers to the domestic state’s responsibility as outlined above. Pillar Two addresses the international community’s obligation and commitment to encourage and assist states (through capacity building) to uphold their Pillar One responsibilities. Pillar Three highlights the range of tools, from peaceful to non-peaceful and less coercive to more coercive, available to the Security Council and regional organizations. The pillars, it is thought, are not sequential, and some cases may only invoke or require Pillars One or Two. Regrettably, much of the debate concerning R2P tends to distill to questions about forcible intervention under Pillar Three.

This brings us to last week’s workshop. The brute fact of the matter is that R2P is a state doctrine, and much of the reality in international affairs is that states will only voluntarily undertake actions. In R2P parlance, this means that there is an ongoing question about the “political will” to uphold R2P. The discussion about political will, however, becomes blurred due to several related aspects. First and more generally, when any discussion of political will raises its head, it seems that almost everyone is working from the assumption of the political will to intervene militarily (the Pillar Three responsibility). Yet R2P proponents are quick to point out that R2P is more than this, as it includes early warning and capacity building.

This leads to a second point. States seem quick to lend rhetorical support for early warning and capacity building, but the discussion ends there. It seems, at least to me, that we ought then to press them to make more explicit commitments on these fronts. Development is linked to prevention, and perhaps we ought to change the background assumptions about political will from intervention to state building.

If this is too strong, as many states are unwilling to engage in prolonged state building enterprises, then there ought to be an open and pressing discussion about peacekeeping. If states are unwilling or unable to open their wallets, then perhaps they would be willing to provide troops. For example, as Perry and Smith note, North America and Europe have the lowest levels of troop contributions compared to Asia and Africa. A keen example is the United Kingdom, which consistently contributes around 0.5% of peacekeepers worldwide. Some might think that these countries are already fulfilling their obligations through foreign aid, so they are under no further obligation to supply peacekeepers, but this logic is unsound for a variety of reasons. Not least amongst them, it overlooks the sad fact that we have no way under the current R2P doctrine to say who has and who has not fulfilled their obligations, or even how those obligations could be fulfilled. (See here, here and here for some discussions about this issue.)

Moreover, the gendered division of peacekeepers is also noteworthy and ought to be pressed from an R2P perspective. If one is looking for a way not only to keep the peace but also to build capacity, then it would seem that including more female peacekeepers could kill two birds with one stone. The level of gender equality is seen as a factor in conflict emergence, and if one could mitigate even small levels of gender inequality while simultaneously saving lives, then this seems like an obvious win. However, looking at the data on female troop contributions, Crawford, Lebovic and Macdonald find that between 2009 and 2011 “86 percent of countries contributed no female personnel to an average mission in all three years, and 99 percent of countries contributed no female personnel to an average mission in at least one of the three years, under consideration.” Capacity building and timely response seem inherently linked on this issue.

What is apparent from the discussions last week and from the reality of R2P, though, is that states are unwilling to commit themselves or their peoples to anything that may end up looking like foreign policy failure. Even if we can divide R2P along the three pillars, states implicitly understand that if they sign on to more than their own responsibility for R2P crimes, this may end up committing them to foreign policy agendas that they deem too risky or too costly. As Feaver and Gelpi argue in their work, states are willing to take on costs, particularly costs in lives, if they are seen to be “winning.” Casualty aversion only becomes a key concern for states when they are losing their foreign policy battles. While the cases are different, Feaver and Gelpi’s findings are illustrative here. Whatever foreign policy goals states set for themselves, they must be able to formulate them in such a way that they can ultimately “win.” Given that R2P is so wide-ranging, covering everything from developing constitutions, building infrastructure, advocating for open democracy, and calling for inclusive education of citizens, to (non)coercive measures to force states to abide by their obligations, it is, in a sense, a foreign policy nightmare. No statesperson could adequately formulate a policy framework that could be operationalized in a way that allows states to show that they upheld their responsibilities, did what they could, and succeeded in their efforts, without also being on the hook for more.

Some might object and say that there are R2P successes. To be sure, there are, but there are also so many “failures” that the variation in foreign policy responses as well as the success rate tell us very little about the conditions for states to act, let alone act and succeed. While states are willing to note that they and the international community have a responsibility to protect, they are unwilling to talk about the finer details, and it is my worry that this is because of the vast expanse of the doctrine itself. If states cannot be seen to win and succeed, then they will either refrain from embarking on an R2P activity, or they will choose to do so from the shadows. Risk of foreign policy failure is, then, inherently linked to the discussion of political will, and it is high time we see that the doctrine itself is breeding its own limitations.

Cyber Letters of Marque and Reprisal: "Hacking Back"

In the thirteenth century, before the rise of the "modern" state, private enforcement mechanisms reigned supreme. Monarchs of the time had difficulty enforcing laws within their jurisdictions, and the practice of private individuals enforcing their own rights was widespread. For the sovereign to "reign supreme" while his subjects simultaneously acted as judge, jury and executioner, the practice of issuing "letters of marque and reprisal" arose. Merchants traveling from town to town or even on the high seas often became the victims of pirates, brigands and thieves. Yet these merchants had no means of redress, especially when they were outside the jurisdiction of their states. Thus the victim of a robbery often sought to take back some measure of what was lost, usually in like property or in proportionate value.

The sovereign saw this practice of private enforcement as a threat to his sovereign powers, and so regulated the practice through the letters of marque. A subject would appeal to his sovereign, giving a description of what transpired and asking permission to go on a counterattack against the offending party. The trouble, however, was that the offending party was often nowhere to be found. Thus reprisals against an "offending" party usually ended up being carried out against the population or community from which the brigand originated. The effect of this practice, interestingly, was to foster greater communal bonds and ties and to cement the rise of the modern state.

One might ask at this point, what do letters of marque and reprisal have to do with cybersecurity? A lot, I think. Recently, the Washington Post reported that there is increasing interest in condoning "hacking back" against cyber attackers. Hacking back, or "active defense," is basically attempting to trace the origins of an attack and then gain access to that network or system. With all of the growing concern about the massive amounts of data stolen from the likes of Microsoft, Target, Home Depot, JPMorgan Chase and nameless others, the ability to "hack back" and potentially do malicious harm to those responsible for data theft appears attractive. Indeed, Patrick Lin argues we ought to consider a cyber version of "stand your ground," where an individual is authorized to defend her network, data or computer. Lin also thinks that such a law may reduce the likelihood of cyberwar because one would not need to engage or even to consult with the state, thereby implicating it in "war crimes." As Lin states, "a key virtue of 'Stand Your Cyberground' is that it avoids the unsolved and paralyzing question of what a state's response can be, legally and ethically, against foreign-based attacks."

Yet this seems to be the opposite approach to take, especially given the nature of private enforcement, state sovereignty and responsibility. States may be interested in private companies defending their own networks, but one of the primary purposes of a state is to provide for public, not private, law enforcement. John Locke famously quipped in his Second Treatise that the problem of who shall judge becomes an "inconvenience" in the state of nature, thereby giving rise to increased uses of force, then war, and ultimately requiring the institution of public civil authority to judge disputes and enforce the law. Cyber "stand your ground" or private hack backs place us squarely back in Locke's inconvenient state.

Moreover, it runs contrary to the notion of state sovereignty. While many might claim that the Internet and the cyber domain show the weakness of sovereignty, they do not do away with it. Indeed, if we are to learn anything from the history of private enforcement and state jurisdiction, sovereignty requires that the state sanction such behavior. The state would have to issue something tantamount to a letter of marque and reprisal. It would have to permit a private individual or company to seek recompense for its damage or lost data. Yet this is, of course, increasingly difficult for at least two reasons. The first is attribution. I will not belabor the point about the difficulty of attribution, which Lin seems to dismiss by stating that "the identities of even true pirates and robbers–or even enemy snipers in wartime–aren't usually determined before the counterattack; so insisting on attribution before use of force appears to be an impossible standard." True attribution for cyber attacks is a lengthy and time-consuming process, often requiring human agents on the ground, and it is not merely about tracing an IP address to a botnet. True identities are hard to come by, and equating a large cyber attack to a sniper is unhelpful. We may not need to know the social security number of a sniper, but we are clear that the person with the gun in the bell tower is the one shooting at us, and this permits us to use force in defense. With a botnet or a spoofed IP address, we are uncertain where the shots are really coming from. Indeed, it makes more sense to think of it like hiring a string of hit men, each hiring a subcontractor, while we try to work out who we have a right of self-defense against: the person doing the hiring, the hit men, or both?
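To make the attribution worry concrete, here is a minimal, purely illustrative Python sketch. Everything in it is hypothetical: the addresses come from reserved documentation ranges, and the "owners" and scenario are invented. The point is only that a naive "hack back" keyed to observed source addresses retaliates against intermediaries or the victims of spoofing, not against the party actually responsible.

```python
# Illustrative sketch only: hypothetical data showing why a "hack back"
# aimed at observed source addresses can misfire. None of these addresses
# or descriptions refer to real infrastructure.

# What the defender's logs show: the immediate source of each connection.
observed_attack_sources = [
    {"src_ip": "198.51.100.23", "owner": "small ISP customer (compromised botnet node)"},
    {"src_ip": "203.0.113.77",  "owner": "university web server (compromised botnet node)"},
    {"src_ip": "192.0.2.14",    "owner": "spoofed address (no real link to the attacker)"},
]

# What attribution actually requires: the party who commanded the attack,
# which never appears in the defender's logs at all.
actual_operator = "unknown third party controlling the botnet"

def naive_hack_back_targets(log_entries):
    """A naive 'strike back' simply returns whatever addresses were observed."""
    return [entry["src_ip"] for entry in log_entries]

if __name__ == "__main__":
    targets = naive_hack_back_targets(observed_attack_sources)
    print("Naive counterattack would hit:", targets)
    print("Actual responsible party:", actual_operator)
    # Every retaliatory strike here lands on an intermediary or on the
    # innocent owner of a spoofed address, not on the attacker.
```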

Second, even if we could issue a cyber letter of marque, we would have to have some metric to establish a proportionate cyber counter-attack. Yet what are identities, credit card numbers, or other types of "sensitive data" worth? What if they never get used? Is the harm then merely the intrusion? Proportionality in this case is not a cut-and-dried issue.

Finally, if we have learned anything from the history of letters of marque and reprisal, it is that they went out of favor. States realized that private enforcement, which turned into public reprisals during the 18th to early 20th centuries, merely encouraged more force in international affairs. The modern international legal system now calls acts that are coercive but fall short of uses of force (i.e. acts that would violate Article 2(4) of the United Nations Charter) "countermeasures." The international community and individual states no longer issue letters of marque and reprisal. Instead, when states have their rights violated (or an 'internationally wrongful act' taken against them), they utilize arbitration or countermeasures to seek redress. For a state to take lawful countermeasures, however, it must identify the state responsible for the wrongful act in question. Yet cyber attacks, if we are to rely on what professional cybersecurity experts tell us, are sophisticated in that they hide their identities and origins. Moreover, even if one finds out the origin of the attack, this may be insufficient to ground a state's responsibility for the act: the state can always deny that it issued a command or hired a "cyber criminal gang." Thus countermeasures against a state in this framework may be illegal.

What all this means is that if we do not want to ignore current international law, or the teachings of history, we cannot condone private companies "hacking back." The only way one could condone it is for the state to legalize it, and if this were the case, then it would be just like the state issuing letters of marque and reprisal. Yet by legalizing such a practice, states may open themselves up to countermeasures by other states. Given that much of the world's Internet traffic passes through the United States (US), many "attributable" attacks will look like they are coming from the US. This in turn means that many states would then have reason to cyber attack the US, thereby increasing and not decreasing the likelihood of cyberwar. Any proposal to condone retaliatory private enforcement in cyberspace should, therefore, be met with caution.

U.S. Options Limited Due to Will and Not Lack of Drones

Today, Kate Brannen's piece in Foreign Policy sent mixed messages with regard to the US-led coalition fighting the Islamic State (IS). She reports that the US is balancing demands "For intelligence, surveillance, and reconnaissance (ISR) assets across Iraq and Syria with keeping an eye on Afghanistan". The implication, reinforced by the title of her piece, is that if the US just had more "drones" over Syria, it would be able to fight IS more adeptly. The problem, however, is that her argument is not only misleading, it is also dismissive of the Arab allies' human intelligence contributions.

While Brannen is right to note that the US has many of its unmanned assets in Afghanistan and that this will certainly change with the upcoming troop drawdown there, it is not at all clear why moving those assets to Syria will yield any better advantage against IS. Remotely piloted aircraft (RPA) are only useful in permissive air environments, that is, environments where one's air assets will not face any obstructions or attacks. The US's recent experience with drone operations abroad has been almost entirely in permissive environments, and as such, it is able to fly ISR missions – and combat ones as well – without interference from an adversary. The fight against IS, however, is not a permissive environment. It may range from non-permissive to hostile, depending upon the area and the capabilities of IS at the time. We know that IS has air defense capabilities, and so these may interfere with operations. What is more, we also know that RPAs are highly vulnerable to air defense systems and are inappropriate for hostile and contested air spaces. NATO recently published a report outlining the details of this fact. Thus before we claim that more "drones" will help the fight against IS, we ought to look very carefully at their operational appropriateness.

A secondary, but equally important, point in Brannen's argument concerns the exportation of unmanned technology. She writes,

“According to the senior Defense Department official, members of the coalition against the Islamic State are making small contributions in terms of ISR capabilities, but it’s going to take time to get them more fully integrated. U.S. export policy is partly to blame for the limits on coalition members when it comes to airborne surveillance, Scharre said. ‘The U.S. has been very reluctant to export its unmanned aircraft, even with close allies.’ ‘There are countries we will export the Joint Strike Fighter to, but that we will not sell an armed Reaper to,’ [Scharre] said.”

The shift from discussing ISR capabilities to the exportation of armed unmanned systems may go unnoticed by many, but it is a very important point. We might bemoan the fact that the US's Arab partners are making "small [ISR] contributions" to the fight against IS, but providing them with unarmed, let alone armed, unmanned platforms may not fix the situation. As I noted above, such platforms may be shot down if flown in inappropriate circumstances. Moreover, if the US wants to remain dominant in the unmanned systems arena, then it will want to be very selective about exporting the technology. Drone proliferation is already occurring, with the majority of the world's countries in possession of some type of unmanned system. While those states may not possess medium or high altitude armed systems, there is worry that it is only a matter of time until they do. Supplying the Kurds with Global Hawks or Reapers, for example, will not fix this situation, and may only upset an already delicate balance between the allies.

Proliferation and technological superiority remain constant concerns for the US, which is why, taken in conjunction with the known limitations of existing unmanned platforms, there has not been a rush to either export or move the remaining drone fleet in Afghanistan to Syria and Iraq. IS is a different enemy than the Taliban in Afghanistan or the "terrorists" in Yemen, Pakistan or Somalia. IS possesses US military hardware; its fighters are battle hardened, have a will to fight and die, and are capable of tactical and operational strategizing. Engagement with them will require forces up close and on the ground, and supporting that kind of fighting from the air is better done with close air support. Thus it is telling that the US is sending in Apache helicopters to aid the fight but not moving more drones.

ISR is of course a necessity. No one denies this. However, to claim that it can only be achieved from 60,000 feet is misleading. ISR comes from a range of sources, from human ones to satellite images. Implying that our Arab allies are merely contributing a "small amount" to ISR dismisses their well-placed intelligence capabilities. Jordan, for example, can provide better on-the-ground assessments than the US can, as the US lacks the will to put "boots on the ground" to gather those sources. Such claims also send a message to these states that their efforts and lives are not enough, when in fact the US relies just as heavily on their boots as they rely on its ISR.

 

ISIS, Syria, the Rebels and the US-Led Coalition: What Governs Who?

In a phone call today with a friend working on issues pertaining to the Responsibility to Protect (R2P), an interesting question arose. In particular, what types of conflict are going on with the fight against ISIS? My friend wanted to draw attention to the R2P aspects of the crisis, and whether the “intervention” on the side of the US was just according to these standards. While this is certainly an interesting question, I think it points us in the direction of a larger set of questions regarding the nature of the conflict itself. That is, what are the existing laws with which we ought to view the unfolding situation inside Syria? The complexity of the situation, while definitely a headache for strategists and politicians, is going to become equally difficult for international lawyers too. In particular the case has at least two different bodies of law at work, as well as laws pertaining to R2P crimes. Thus any action within Syria against ISIS, or Al-Qaeda, or Assad, or the rebels will have to be dealt with relationally.

Let us look to the case. Syria has been experiencing civil war for three years. Assad's violations of the rights of his people mean that he has manifestly failed to uphold the Responsibility to Protect doctrine. R2P requires that states hold the primary responsibility to protect their peoples from genocide, ethnic cleansing, war crimes and crimes against humanity. Given Assad's use of chemical weapons and cluster munitions, as well as his targeting of civilian populations, he has clearly committed war crimes and crimes against humanity. That Assad has employed the Shabiha, a private paramilitary force, to engage in killing means that he has more than likely engaged in ethnic cleansing as well. In a perfect world, the Security Council would have acted in a "timely and decisive manner" to stop such abuses, and would have referred the case to the International Criminal Court (ICC) for prosecution. Of course, in May of this year, 53 countries urged the Security Council to refer the situation to the ICC. A mere two days later, Russia and China blocked the referral to the ICC by utilizing their permanent veto powers. After three years of bloodshed, civil breakdown, hundreds of thousands dead, and three million refugees, it is all too clear that there was no desire to intervene in the crisis. Thus we can say that there is an ongoing R2P crisis, and that Assad, as leader of the government of Syria, ought to be held to account for these acts. Moreover, there is a failure of the international community to live up to its obligations (as it voluntarily incurred under the 2005 World Summit Outcome Document).

The sheer destruction and violence inside Syria is what permitted the rise of ISIS. This seems an indisputable fact.   The group capitalized on the civil war and breakdown, the tensions between and factionalization of the Syrian rebel groups, and the international community’s reluctance to engage Assad.   Thus until ISIS pushed into Iraq, the international community would probably have let it be. Moreover, international law would have deemed the issue one of a non-international armed conflict.   However, once ISIS set its sights on the Mosul Dam, the international community began to wake up.

With this act, ISIS transformed the non-international armed conflict into a two-dimensional one. In other words, it added an international dimension too. The fighting between the rebels and the Assad regime continued (and continues) to be a non-international armed conflict, but ISIS's fighting in Iraq meant that the ISIS-Iraq-Kurd conflict is international. If one doubts this reading, the conflict would at the very least have become a transnational armed conflict, but because ISIS targeted Iraqi infrastructure, it seems more likely that this single act transformed the conflict into an international one.

Now that the US and other regional powers have entered the fray, it is most definitely an international armed conflict – between ISIS and these states. However, we must still remember that the civil war between Assad and the various rebel fighters is also still ongoing (as, presumably, is the conflict between ISIS and Assad). Thus there is still a non-international armed conflict here too. And, let us not forget, R2P and Assad!

What does this all mean? Well, in short it means that the only way to tell which set of laws applies is to look at the relation of the parties at any given moment. The casuistry here will become the all-important determining factor. For example, if the US trains and arms "moderate" Syrian rebels, one would have to look at the particular operation to determine which set of laws applies. Is the operation one undertaken in support of or in concert with the US-led coalition against ISIS? Yes? Then international humanitarian law applies. Is the operation undertaken by these trained and armed rebels one against the Assad regime? Yes? Well, then this may or may not be a non-international armed conflict. The International Court of Justice, for instance, holds that in the case of third party intervention in support of a rebel group, the third party needs to have "overall control" of the rebel group for that conflict to be considered "internationalized." Given the different rebel groups, this could become a daunting analysis. Is control of one group sufficient to say the conflict is internationalized for all of them, or just for that one group?

These little details matter because the law of international armed conflict is much more robust than the law pertaining to non-international armed conflict. As the International Committee of the Red Cross notes:

“Although the existence of so many provisions and treaties may appear to be sufficient, the treaty rules applicable in non-international armed conflicts are, in fact, rudimentary compared to those applicable in international armed conflicts. Not only are there fewer of these treaty rules, but they are also less detailed and, in the case of Additional Protocol II, their application is dependent on the specific situations described above.”

In other words, there are gaps in the protection of rights, persons, property and the environment under the law of non-international armed conflict that do not exist under the law of international armed conflict. Thus the case of ISIS challenges the international community in more ways than one. It is not that there are no laws applying to these conflicts, but that the conflicts are so convoluted that the states and parties to them, as well as potential international prosecutors, will have to rely on much more circumstantial evidence to sort out what is permissible and when. This, however, is not something likely to happen ex ante in targeting operations, training and arming. I fear that while there are overlapping jurisdictions of rules and laws here, the convoluted nature of the conflicts will engender an even greater realm of permissiveness, and the parties will end up transferring more risk and harm to bystanders. Civilians always suffer, to be sure, but the laws of war are supposed to mitigate that suffering. If the laws of war are convoluted because of the complexity of the actors and their relationships, then this will have greater deleterious effects on the lives and rights of noncombatants.

Obama’s ISIS Strategy: A Clausewitzian Perspective

Much ink has been spilled over the last few days concerning President Obama's speech on Wednesday evening regarding ISIS, as well as how his strategy will face many challenges going forward. Some argue that he does not go far enough, others that he has not fully laid out what to do in Syria when he has to face a potential deal with Assad. I, however, would like to pause and ask about the motivations on each side of this conflict, and whether we have any indications about how the asymmetry of motivations may affect the efficacy of Obama's campaign. Moreover, we ought also to look to how this strategy is designed to reach the end goal (whatever that may be).

Clausewitz’s famous “trinity” is helpful here, and it is worth quoting him in full:

“War is more than a true chameleon that slightly adapts its characteristics to the given case. As a total phenomenon its dominant tendencies always make war a paradoxical trinity–composed of primordial violence, hatred, and enmity, which are to be regarded as a blind natural force; of the play of chance and probability within which the creative spirit is free to roam; and of its element of subordination, as an instrument of policy, which makes it subject to reason alone.

The first of these three aspects mainly concerns the people; the second the commander and his army; the third the government. The passions that are to be kindled in war must already be inherent in the people; the scope which the play of courage and talent will enjoy in the realm of probability and chance depends on the particular character of the commander and the army; but the political aims are the business of government alone.”

While Clausewitz here looks only to one side of the equation, ignoring the same trinity at work on the adversary's side, his framework is helpful for us today. In particular, Clausewitz's focus on the people – the passions of the people needed to wage war – is a key component of the discussion about US involvement in Iraq and Syria. Without such a will to fight, the war effort will be hampered. Indeed, we can see evidence of this in the scholarly work on coercive diplomacy.

As Alexander George argues, "what is demanded of the opponent, and his motivation to resist are closely related […] there is often an important strategic dimension to the choice of the objective." Indeed, he goes on to argue that coercive diplomacy is most likely to be successful when there is an "asymmetry of interests," where the coercing power has more motivation to fight, and to back up its threat to fight, than the target. Even then, there is a poor success rate (32%).

While it is certainly true that the "fight" against ISIS is not really a classic case of coercive diplomacy, at this point it does not feel like a Clausewitzian conventional war either. President Obama's reluctance to engage in ground combat, and his restriction of US military force to training and air power, is a signal that his interests, while strong enough to engage, are not strong enough for more than "limited" war. That he will rely on Iraqi and Kurdish forces, as well as the disparate patchwork of "moderate" Syrian rebels, to do the ground fighting is a case in point. The asymmetry of interests, as it stands now, favors ISIS and not the US.

This leads us to the second part of the trinity: chance, probability and the commander. While Clausewitz does speak of the genius of a commander, one with a coup d'œil, this presupposes that the commander (or general) truly understands the adversary and the forces – his own, his allies' and the adversary's – and is able to augur the adversary's strategies and tactics. John Allen, retired four-star Marine general, has been tapped to lead the fight against ISIS. While Allen is certainly talented and experienced with coalition actions and counterinsurgency strategies (COIN), fighting against ISIS is a different game. First, the coalition in Afghanistan was a NATO-led one, meaning that the soldiers Allen had to oversee were professional soldiers who had for decades engaged in mutual training exercises; they train together to ensure interoperability. The coalition in Iraq/Syria will not look even remotely like this. Second, fighting a counterterrorism campaign requires different tactics than regular warfare, and ISIS is not wholly one or the other. In other words, the US military, in conjunction with its allies, can attack ISIS combatants and materiel, but this will not "defeat" ISIS. ISIS is an ideology as much as it is a group of brutal extremists. Allen, for all his experience in Afghanistan, cannot rely on that experience as a heuristic when facing ISIS, for any strategy going forward will have to blend COIN, conventional and unconventional war.

Finally, if we are to learn from the Prussian strategist, we must look back to President Obama and his Joint Chiefs. The political goals must be clearly defined. Strategies without a clear objective are useless to the commanders, the warfighters, and all those who suffer under hostilities. President Obama declared: "Our objective is clear: We will degrade, and ultimately destroy, ISIL through a comprehensive and sustained counterterrorism strategy." There are two problems with this perspective. First, the only "interest" that the US has here is that ISIS (or ISIL) is a distant but potential threat to the US. It is unclear how probable this threat is, let alone how imminent. Prudence, law and morality dictate that one ought only to respond to imminent – that is, temporally impending – threats. We are left wondering when it comes to this one. Thus the strategic goal is not to protect the US (this is a secondary goal, or a side effect, of Obama's objective). Rather, the goal is to degrade and destroy.

This brings us to the second point: there is a fair bit of daylight between degrading an adversary's ability to act and destroying it. The first involves a denial strategy, whereby the US and its allies would undermine, or make it increasingly difficult for, ISIS to achieve its military (and presumably political) objectives. But denial strategies do not involve eliminating an entire force. While ISIS is certainly liable to attack, and any fighter within its ranks is a legitimate target, there are still some rules that would prohibit wholesale slaughter. What if the US begins its campaign and deals significant blows to ISIS? What if the ISIS fighters start surrendering? They have combatant rights: they wear insignia, carry their arms openly, and are in a hierarchical command structure. Thus, the US and its allies are obligated to give them prisoner of war status. But here is the rub: destroying ISIS, because it is an ideology, would require the wholesale slaughter of all ISIS fighters. That is clearly immoral and impermissible, and even then there is no guarantee that the symbolism of killing them would not simply generate more fighters taking up the black flag. Thus one can never wholly destroy ISIS in the way President Obama lays out. I wrote before that one can only destroy ISIS when one takes away the need for it. What this means is that even if the US and its coalition are able to stop this atrocious group militarily, it will require post-conflict reconstruction, jobs, education, healthcare, and rebuilding the rule of law. This is a fact – if the US wants to "destroy" ISIS. The other uncomfortable truth is that post-conflict strategies are going to be increasingly difficult while Assad is still in power and a civil war still rages on. Thus if the US holds tightly to its "strategy," it should be very careful about expanding its war aims beyond ISIS to the Assad regime, for otherwise the US and many others will end up tumbling down the rabbit hole (again).

Obama's "Lack" of Strategy Towards ISIS

The last two days have seen a maelstrom of media attention to President Obama’s admission that he currently does not have a strategy for attacking or containing ISIS (The Islamic State in Iraq and Syria) in Syria.   It is no surprise that those on the right criticized Obama’s candid remarks, and it is equally not surprising that the left is attempting some sort of damage control, noting that perhaps the “no strategy” comment is really Obama holding his cards close to his chest.   What seems to be missing from any of the discussion is what exactly he meant by “strategy,” and moreover, the difficult question of the end he would be seeking.

Let’s take the easy part first. Strategy, at least for the military, has a very particular meaning. It is about ends, ways and means of a military character. Indeed, strategy, as distinct from operational planning and tactics, is about the overall end state of a war (or “limited” war).   The strategic goal, therefore, is about the desired state of affairs post bellum. It requires that one ask: What is it that I want to achieve? How would I get there through the use of force? “Strategy” is not tantamount to “planning,” and for the strategist, ought to be reserved for strictly military activities.

Once one identifies the desired end, one must then take this goal and break it down into more manageable pieces through another two levels: operations and tactics. The operational level concerns the middle term: it is something beyond a particular tactic (say, aerial bombardment of an enemy's rear line), something broader, say a collection of missions. All the operations ought to be directed toward some particular portion of the overall strategy. At each level a commander is issued a set of orders, and each commander then takes her orders and operationalizes them into how she thinks best to achieve them (commander's intent). She does so by consulting with a variety of reporting officers (weaponeers, logistics, lawyers, etc.). This is a hierarchical and a horizontal process, and it always feeds back upon itself to ensure those goals are in fact being achieved. Or, at least, this is how the process ought to go.

It is, therefore, laudable that President Obama admitted that he does not yet have a strategy for dealing with ISIS in Syria. Why? Because the desired "end goal," which any strategy necessarily requires, is not yet clear. Does the US want to "defeat" ISIS? Surely that is part of the equation, as Secretary of State Kerry called it a "cancer." Yet there is more to this tale than merely quashing a group of radicalized, well-organized and heavily armed nonstate actors. US military power could do this relatively quickly, if it desired to do so. But this would not "defeat" ISIS in the sense of seeking a better peace or achieving one's end goal. For taking it out does not entail that justice and harmony will prevail.

This brings us to the second and more difficult question: What is the desired end goal? While I am not privy to the Commander-in-Chief's thought processes, nor present with the Joint Chiefs of Staff in their briefings to the President, as a student of strategy and an observer and academic, it appears to me that the President has not yet adequately formulated what this end goal ought to be. If one truly desires that ISIS be "defeated," this will take more than air strikes, and it will take more than (whoever's) boots on the ground. It will take establishing the rule of law and providing for basic needs, such as food, security and water, as well as jobs, education, and infrastructure. For ISIS is not a traditional "enemy"; it is a monster made from the blood, havoc, insecurity and fear that have ruled Syria for three years. This new crisis over ISIS does not come from nowhere: over three million Syrians are refugees; over six million are internally displaced; and almost two hundred thousand have died. Bashar al-Assad's crimes against humanity and war crimes provided the incubator for ISIS. Moreover, the world's failure – not just the US's – to do anything to protect the Syrian people and respond to Mr. Assad's crimes gave ISIS room to grow and consolidate. That the international community manifestly failed in its responsibility to protect the Syrian people is obvious, and it is equally obvious that one cannot ignore a crisis and think it will just go away.

Recall that at the very beginning of the Syrian crisis, up until the (in)famous "red line" of chemical weapons, the US could not garner support from its allies or from its own people. The geopolitical situation now, while still heavily shaped by Iran and Russia, is not much different. To be sure, Russia is clearly on its own dangerous course in Ukraine, and Iran has ISIS in its backyard, but there is no upwelling of international support for this cause.

Secretary of State Kerry's op-ed in the New York Times calls for a "global coalition" to fight ISIS. Whether he realizes that this threat is not just about ISIS, that ISIS is merely a Golgothan of the Syrian civil war, remains to be seen. To actually "defeat" ISIS is to remove the need for ISIS. ISIS has merely filled a Hobbesian vacuum where:

“The notions of Right and Wrong, Justice and Injustice have there no place [in a state of nature]. Where there is no common Power, there is no Law; where no Law, no Injustice. Force and Fraud, are in warre, the two Cardinal Vertues. Justice, and Injustice are none of the Faculties neither of the Body, nor Mind. […] They are Qualities, that relate to men in Society, not in Solitude” (Hobbes, Leviathan, Chapter 13, para. 63.)

Yet if we view the fight against ISIS as something beyond mere military victory, it is a fight against ideology, insecurity, and fear. Indeed it does require a global coalition, but one directed towards the establishment of peace and security in the Middle East – and beyond – and the protection of human rights and the rule of law. In this, it requires states to look beyond their immediate self-interests. Therefore, I am actually happy to see the President give pause. For maybe, just maybe, he too sees that the problem is larger than dropping tons of ordnance on an already destroyed nation. Maybe, just maybe, he sees that ISIS can only be defeated through broader cosmopolitan principles of justice. If this is too tall an order, then he must tread very carefully while formulating his restricted and "limited" strategy.

Monstermind or the Doomsday Machine? Autonomous Cyberwarfare

Today in Wired magazine, James Bamford published a seven-page story and interview with Edward Snowden. The interview is another unique look into the life and motivations of one of America's most (in)famous whistleblowers; it is also another step in revealing the depth and technological capacity of the National Security Agency (NSA) to wage cyberwar. What is most disturbing about today's revelations is not merely what they entail from a privacy perspective, which is certainly important, but what they entail from an international legal and moral perspective. Snowden tells us that the NSA is utilizing a program called "Monstermind." Monstermind automatically hunts "for the beginnings of a foreign cyberattack. [… And then] would automatically block it from entering the country – a "kill" in cyber terminology." While this seems particularly useful, and morally and legally unproblematic, as it is a defensive asset, Monstermind adds another not so unproblematic capability: autonomously "firing back" at the attacker.
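To fix ideas, here is a minimal, purely speculative Python sketch of the two capabilities as Snowden describes them. It reflects nothing of any actual NSA system: the detection rule, function names and addresses are all hypothetical. The point is simply the structural difference between blocking an incoming attack and automatically retaliating against whatever address the attack appears to come from.

```python
# Purely illustrative sketch of the two capabilities described above.
# Nothing here reflects actual NSA code; the detection rule, names and
# addresses are hypothetical (the IP comes from a documentation range).

def detect_attack(traffic):
    """Hypothetical detector: flags traffic matching a known hostile signature."""
    return traffic.get("signature") == "known_exploit"

def block_attack(traffic):
    """The defensive 'kill': drop the hostile traffic before it reaches its target."""
    print(f"Blocked traffic apparently from {traffic['apparent_origin']}")

def strike_back(traffic):
    """The added autonomous step: retaliate against the *apparent* origin,
    with no human review and no guarantee that the origin is genuine."""
    print(f"Counterattack launched against {traffic['apparent_origin']}")

incoming = {"signature": "known_exploit", "apparent_origin": "203.0.113.50"}

if detect_attack(incoming):
    block_attack(incoming)   # defensive step: largely unproblematic
    strike_back(incoming)    # offensive step: automatic, and aimed at an address
                             # that may be spoofed or belong to an innocent third party
```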

Snowden cites two problems with this new tactic. First, he claims that it would require access to “all [Internet] traffic flows” coming in and outside of the US. This means in turn that the NSA is “violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.” Second, he thinks it could accidentally start a war. More than this, it could accidentally start a war with an innocent third party because an attacking party could spoof the origin of the attack to make it look like another country is responsible. In cyber jargon, this is the “attribution problem” where one cannot with certainty attribute an attack to a particular party.

I, however, would like to raise another set of concerns in addition to Snowden's: that the US is knowingly violating international humanitarian law (IHL) and acting against just war principles. First, through automated or autonomous responses, the US cannot by definition consider or uphold Article 52 of Additional Protocol I to the Geneva Conventions. It will violate Article 52 on at least two grounds. To begin, it will violate Article 52(2), which requires states to limit their attacks to military objectives. These include "those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage." While one might object that the US has not ratified Additional Protocol I, it is still widely held as a customary rule. Even if one holds this is not enough, we can still claim that autonomous cyber attacks violate US targeting doctrine (and thus Article 52(2)) because this doctrine requires that any military objective be designated by a military commander and vetted by a Judge Advocate General, ensuring that targeting is compliant with IHL. That a computer system strikes "back" without direction from a human being undermines the entire targeting process. Given that the defensive capacity to "kill" the attack is present, there seems no good reason to counter-strike without human oversight. Second, striking back at an ostensibly "guilty" network will more than likely have significant effects on civilian networks, property and functionality. This would violate the principle of distinction, laid down in Article 52(1).

If one still wanted to claim that the NSA is not a military unit, and that any "strike back" cyber attack is not one taken under hostilities (and thereby not governed by IHL), then we would still require an entire theory (and body of law) of what constitutes a legitimate use of force in international law that does not violate the United Nations Charter, particularly Article 2(4), which prohibits states from using or threatening to use force. One might object that a cyber attack that does not result in property damage or the loss of life is not subject to this prohibition. However, even if one takes the view that such an attack does not rise to the level of an armed attack in international law (see, for instance, the Tallinn Manual), that does not mean the attack is not a use of force, and thus it may still be prohibited. Furthermore, defensive uses of force are permissible under international law only in response to attacks that rise to the level of an armed attack (Article 51).

Second, autonomous cyber attacks cannot satisfy the just war principles of proportionality. The first proportionality principle has to do with ad bellum considerations of whether or not it is permissible to go to war. Whether we ought to view the "strike back" as engaging in war at all, or as a different kind of war, is another question for another day. Today, however, all we ought to consider is that a computer program automatically responds in some manner (which we do not know) to an attack (presumably preemptively). That response may trigger an additional response from the initial attacker – either automatically or not. (This is Snowden's fear of accidental war.) Jus ad bellum proportionality requires that all the harms be weighed against the benefits of engaging in hostilities. Yet this program vitiates the very difficult considerations required. In fact, it removes the capacity for such deliberation.

The second proportionality principle that Monstermind violates is the in bello version. This version requires that one use the least amount of force necessary to achieve one's goals. One wants to temper the violence used in the course of war, to minimize destruction, death and harm. The issue with Monstermind is that prior to any identification of an attack, and any "kill" of an incoming attack, someone has to create and set into motion the second step of "striking back." However, it is very difficult, even in times of kinetic war, to respond proportionately to an attack. Is x amount of force enough? Is it too much? How can one preprogram a "strike back" attack for a situation that may or may not fit the proportionality envisioned by an NSA programmer at any given time? Can a programmer put herself into a position to envision how she would act at a given time in response to a particular threat (this is what Danks and Danks (2013) identify as the "future self-projection bias")? Moreover, if this is a "one-size-fits-all" model of a "strike back," then it cannot by definition satisfy in bello proportionality, because each situation will require a different type of response to ensure that one is using the minimal amount of force possible.

What all of this tells us is that the NSA is engaging in cyberwar: autonomously, automatically and without our or our adversaries' knowledge. In essence it has created not Monstermind, but the Doomsday Machine. It has created a machine that possesses an "automated and irrevocable decision making process which rules out human meddling" and thus "is terrifying, simple to understand, and completely credible and convincing" now that we know about it.
