Tag: international law (page 1 of 5)

The “Right” to Be Forgotten & Digital Leviathans

We hear every day that technology is changing rapidly and that we are at risk of others violating our rights through digital means. We hear about cyber attacks that steal data, such as credit card numbers, social security numbers, names, incomes, or addresses. We hear about attacks that steal intellectual property, from movies to plans for the F-35 Joint Strike Fighter. Indeed, we face a continual onslaught not only from cyber criminals but from the media as well. One of the lesser-reported issues in the US, however, has been a different discussion about data and rights protection: the right to be forgotten.

Last year, the European Court of Justice ruled in Google v. Costeja that European citizens have the right, under certain circumstances, to request that search engines like Google remove links that contain personal information about them. The Court held that in instances where data is “inaccurate, inadequate, irrelevant or excessive,” individuals may request that the information be erased and delinked from search engine results. This “right to be forgotten” is intended to support and complement an individual’s privacy rights. It is not absolute, but must be balanced “against other fundamental rights, such as freedom of expression and of the media” (paragraph 85 of the ruling). Mr. Costeja had asked that a 1998 article in a Spanish newspaper, which contained information about an auction of his foreclosed home, be delinked from his name. Because he had subsequently paid the debt, the Court ruled that the link to this information was no longer relevant. The ruling did not require that information about Mr. Costeja be erased or that the newspaper article be eliminated, merely that the search engine results need not make this particular information “ubiquitous.” The idea is that in an age of instantaneous and ubiquitous information about private details, individuals have a right to try to balance their personal privacy against other rights, such as freedom of speech. Continue reading

Meaningful or Meaningless Control

In May of 2014, the states parties to the United Nations Convention on Conventional Weapons (CCW) first considered the issue of banning lethal autonomous weapons. Before the start of the informal expert meetings, Article 36 circulated a memorandum on the concept of “meaningful human control.” The document attempted to frame the discussion around the varying degrees of control over increasingly automated (and potentially autonomous) weapons systems in contemporary combat. In particular, Article 36 posed the question as one about what the appropriate balance of control ought to be over a weapons system that can operate independently of an operator in a defined geographical area for a particular period of time. Article 36 does not define “meaningful control,” but rather seeks to generate discussion about how much control ought to be present, what “meaningful” entails, and how computer programming can enable or inhibit human control. The states parties at the CCW agreed that this terminology was crucial and that no weapons system lacking meaningful human control ought to be deployed. The Duck’s Charli Carpenter has written about this as well, here.

In October, the United Nations Institute for Disarmament Research (UNIDIR) held a conference on the concept of meaningful human control. Earlier this month, states again convened in Geneva at another CCW meeting and agreed to further consider the matter in April of 2015. Moreover, other civil society groups are also now beginning to think about what this approach entails. It appears, then, that this concept has become a rallying point in the debate over autonomous weapons. Yet while we have a common term to agree upon, we are not clear on what exactly “control” requires, what proxies we could utilize to make control more efficacious, such as geographic or time limits, or what “meaningful” would look like.

Today, I had an engaging discussion with colleagues about a “semi-autonomous” weapon: Lockheed Martin’s Long-Range Anti-Ship Missile (LRASM). One colleague claimed that this missile is in fact an autonomous weapon, as it selects and engages a target. Another colleague, however, claimed that it is not an autonomous weapon because a human being preselects the targets before launching the weapon. Both of my colleagues are correct. Yet how can this be so?

The weapon does select and engage a target after it is launched, and the particular nature of the LRASM is that it can navigate in denied environments where other weapons cannot. It can change course when necessary, and when it finds its way to its preselected targets, it selects among those targets based upon an undisclosed identification mechanism (probably similar in image recognition to other precision-guided munitions). LRASM is unique in its navigation and target-cuing capabilities, as well as its ability to coordinate with other launched LRASMs. The question of whether it is an autonomous weapon, then, is really a question about meaningful human control.

Is it a question about “control” once the missile reaches its target destination and then “decides” which ship amongst the convoy it will attack? Or is it a question about the selection of the grid or space that the enemy convoy occupies? At what point is the decision about “control” to be made?

I cannot fully answer this question here. However, I can raise two potential avenues for the way forward. One is to consider human control not in terms of a dichotomy (there is either a human being deliberating at every juncture and pulling a trigger or there is not), but in terms of an escalatory ladder. That is, we start with the targeting process, from the commander all the way to a targeteer or weaponeer, and examine how decisions to use lethal force are made and on what basis. This would at least allow us to understand the different domains (air, land, sea) that we are working within, the types of targets likely found, and the desired goals to be achieved. It would also allow examination of when particular weapons systems enter the discussion. For if we have an understanding of what types of decisions, drawing on various (perhaps automated) types of information, are made along this ladder, then we can determine whether some weapons are appropriate or not. We might even glean what types of weapons are always out of bounds.
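
To make the ladder idea a bit more concrete, here is a minimal, purely illustrative sketch in Python. The control levels, target categories, and thresholds are hypothetical placeholders of my own, not anything drawn from doctrine or from the CCW discussions; the point is only that human control can be graded rather than treated as an on/off switch, and that the rung required might vary with the targeting context.

```python
from enum import IntEnum

class HumanControl(IntEnum):
    """Hypothetical rungs on an escalatory ladder of human control."""
    NONE = 0              # weapon selects and engages with no human input
    PRE_MISSION_ONLY = 1  # humans set targets/area before launch, none after
    SUPERVISED = 2        # humans monitor and can abort mid-mission
    DELIBERATED = 3       # a human approves each individual engagement

def minimum_control_required(target_type, civilian_presence_likely):
    """Toy rule: the more ambiguous the environment, the higher the rung required."""
    if civilian_presence_likely:
        return HumanControl.DELIBERATED
    if target_type == "warship_in_open_sea":
        return HumanControl.PRE_MISSION_ONLY
    return HumanControl.SUPERVISED

def engagement_permitted(weapon_control, target_type, civilian_presence_likely):
    return weapon_control >= minimum_control_required(target_type, civilian_presence_likely)

# An LRASM-like weapon: human target selection before launch, none afterward.
print(engagement_permitted(HumanControl.PRE_MISSION_ONLY, "warship_in_open_sea", False))  # True
print(engagement_permitted(HumanControl.PRE_MISSION_ONLY, "vehicle_in_port", True))       # False
```

On this toy model, the same weapon can be acceptable in one context and unacceptable in another, which is precisely why the ladder has to be anchored in the targeting process rather than in the weapon alone.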

Second, if this control ladder is too onerous a task, or perhaps too formulaic and likely to induce a perverse incentive to create weapons right up to a particular line of automation, then perhaps the best way to think about what “meaningful human control” entails is not to think about its presence, but rather its absence. In other words, what would “meaningless” human control look like? Perhaps it is better to define the concept negatively, by what it is not, rather than what it is. We have examples of this already, particularly with the US’s policy regarding covert action. The 1991 Intelligence Authorization Act defines covert action very vaguely, and then in more concrete terms defines what it is not (e.g. intelligence gathering, traditional or routine military or diplomatic operations, etc.). Thus clear cases of “meaningless” control would be to launch a weapon system without undertaking any consideration of the targets, the likely consequences, and the presence of civilian objects or persons, or to launch a weapon that perpetually patrols. This is, of course, cold comfort to those who want to ban autonomous weapons outright; a ban would require a positive, not a negative, definition.
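
The negative approach also lends itself to a simple checklist. The sketch below is again purely illustrative, using criteria lifted from the examples in the preceding paragraph (no consideration of targets, consequences, or civilian presence; a weapon that patrols without spatial or temporal bounds); it flags clear cases of meaningless control without attempting a positive definition of meaningful control.

```python
def control_is_meaningless(targets_considered,
                           consequences_assessed,
                           civilian_presence_reviewed,
                           bounded_in_space,
                           bounded_in_time):
    """Negative definition: name conditions whose absence clearly defeats
    meaningful human control, rather than defining what such control is."""
    no_deliberation = not (targets_considered and consequences_assessed
                           and civilian_presence_reviewed)
    perpetual_patrol = not (bounded_in_space and bounded_in_time)
    return no_deliberation or perpetual_patrol

# A weapon launched with no target review and no geographic or time limits:
print(control_is_meaningless(False, False, False, False, False))  # True
# A bounded mission with full pre-launch deliberation is not flagged:
print(control_is_meaningless(True, True, True, True, True))       # False
```

Note that passing this checklist does not establish that control is meaningful; it only rules out the clear cases of meaninglessness, which is all a negative definition can do.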

States would have to settle the question of whether any target on a grid is fair game, or whether only pre-identified targets – and not targets of opportunity – are fair game. It may also require states to become transparent about how such targets are confirmed, or how large a grid one is allowed to use. For if a search area ends up looking like the entire Pacific Ocean, that pesky question about “meaningful” raises its head again.

Privacy, Secrecy & War: Emperor Rogers and the Failure of NSA Reform

On November 3, the head of Britain’s Government Communications Headquarters (GCHQ) published an opinion piece in the Financial Times arguing that technology companies such as Twitter, Facebook, and WhatsApp (and, by implication, Google and Apple) ought to comply with governments to a greater extent to combat terrorism. When tech companies further encrypt their devices or software, as Apple has recently done with the iPhone 6 or as WhatsApp has accomplished with its software, GCHQ chief Hannigan argues that this is tantamount to aiding and abetting terrorists. GCHQ is the British counterpart of the US’s National Security Agency (NSA); both are charged with Signals Intelligence and information assurance.

Interestingly, Hannigan’s opinion piece came only weeks before the US Senate voted on whether to limit the NSA’s ability to conduct bulk telephony metadata collection, as well as to reform other aspects of the NSA’s activities. Two days ago, this bill, known as the “USA Freedom Act,” failed to pass by two votes. While Hannigan stressed that companies ought to be more open to complying with governments’ requests to hand over data, the failure of the USA Freedom Act strengthened at least the US government’s position to continue its mass surveillance of foreign and US citizens. It remains to be seen how the tech giants will react.

In the meantime, the bill also sought, amongst other things, to make transparent the number of government requests to tech companies, to force the NSA to seek a court order from the Foreign Intelligence Surveillance Court (FISC) to query the (telecom-held) data, and to require the NSA to list the “specific selection term” to be used while searching the data. Moreover, the bill would have mandated an amicus curiae, or “friend of the court,” in the FISC to offer arguments against government requests for searches, data collection and the like, which the court currently lacks. Many of these reforms were welcomed by tech companies like Google and Apple and were also suggested in a 2013 report for the White House on NSA and intelligence reform.

Many of the disagreements over the bill arose along two lines: that the bill hamstrung the US’s ability to “fight terrorists,” and that the bill failed to go far enough in protecting the civil liberties of US citizens. The latter concern arose because the bill would have reauthorized Section 215 of the PATRIOT Act (set to expire in 2015) through 2017. Section 215 permits government agents, such as the FBI and the NSA, to compel third parties to hand over business records and any “other tangible objects” whenever the government requests them in pursuance of an “authorized investigation” against international terrorism or clandestine intelligence activities. In particular, Section 215 merely requires the government to present specific facts that would support a “reasonable suspicion” that the person under investigation is in fact an agent of a foreign power or a terrorist. It does not require a showing of probable cause, only a general test of reasonableness, and this concept of reasonableness is stretched to quite a limit. Democratic support for the bill came most strongly from Senator Dianne Feinstein (D-Calif.), who is reported to have said, “I do not want to end the program [215 bulk collection],” so “I’m prepared to make the compromise, which is that the metadata will be kept by the telecoms.”

Where, then, does the failure of this bill leave us? In two places, actually. First, it permits the NSA to carry on with the status quo. Edward Snowden’s revelations of mass surveillance appear to have fallen off the American people’s radar, and with that, Congress has been permitted to punt on the issue until its next session. Moreover, given that the next session will feature a Republican-dominated House and Senate, there is a high probability that any bill passed will either reaffirm the status quo (i.e. reauthorize Section 215) or potentially strengthen the NSA’s abilities to collect data.

Second, this state of affairs will undoubtedly strengthen the position of Emperor Mike Rogers. Admiral Mike Rogers recently replaced General Keith Alexander as the head of both the NSA and US Cyber Command (Cybercom). I refer to the post holder as “Emperor” not merely because of the vast array of power in the hands of the head of NSA/Cybercom, but also because such an alliance is antithetical to a transparent and vibrant democracy that believes in separating its intelligence-gathering and war-making functions. (For more on former Emperor Alexander’s conflicts of interest and misdeeds see here.)

The US Code separates the authorities and roles for intelligence gathering (Title 50) from those for US military operations (Title 10). In other words, it was once believed that intelligence and military operations were separate but complementary in function, and that they were limited by different sets of rules and regulations. These may be as mundane as reporting requirements or as obvious as rules about the permissibility of engaging in violent activities. However, with the creation of the NSA/Cybercom Emperor, we have married Title 10 and Title 50 in a rather incestuous way. While it is certainly true that Cybercom and the NSA are both in charge of Signals Intelligence, Cybercom is actively tasked with offensive cyber operations. What this means is that there is serious risk of conflicts of interest between the NSA and Cybercom, as well as a latent identity crisis for the Emperor. If one is constantly swapping a Title 10 hat for a Title 50 hat, or viewing operations alternately as military operations or intelligence gathering, there will eventually be a merging of the two. That both post holders have been high-ranking military officers makes it most likely that the character of NSA/Cybercom will be more militaristic, but with the potential for the Emperor to issue ex post justifications for various “operations” as intelligence gathering under Title 50, and thus subject to less transparent oversight and reporting.

One might think this fear mongering, but I think not. Suppose, for example, that the Emperor deems it necessary to engage in an offensive cyber operation that might, say, alter the financial transactions or statements of a target, and that part of this operation requires the US’s role to remain secret. This operation would be tantamount to a covert action as defined under Section 413b(e) of Title 50. Covert actions have a tumultuous history, but suffice it to say, the President can order them directly, and they carry rather limited reporting requirements to Congress. What, however, would be the difference if the same action were ordered by Admiral Rogers in the course of an offensive cyber operation? The same operation, the same person giving the order, but the difference in international and domestic legal regulation is drastic. How could one possibly limit any ex post justification for secrecy if something were to come to light or if harm were inflicted? The answer is that there is no way to do this within the current system, because the post holder is simultaneously a military commander and an intelligence authority.

That the Senate has refused to pass even a watered-down version of NSA reform only further strengthens this position. The NSA is free to collect bulk telephony metadata, and, moreover, it is free to hold that data for up to five years. It can also query the data without a court order, and it is not compelled to make transparent any of its requests to telecom companies. Furthermore, one of the largest reforms necessary—separating the functions of the NSA and Cybercom—continues to go unaddressed. The Emperor, it would seem, is still free to do what he desires.

Cyber Letters of Marque and Reprisal: “Hacking Back”

In the thirteenth century, before the rise of the “modern” state, private enforcement mechanisms reigned supreme. Because monarchs of the time had difficulty enforcing laws within their jurisdictions, private individuals routinely enforced their own rights, acting as judge, jury and executioner; to allow the sovereign to “reign supreme” over this widespread practice, the device of issuing “letters of marque and reprisal” arose. Merchants traveling from town to town or on the high seas often became the victims of pirates, brigands and thieves. Yet these merchants had no means of redress, especially when they were outside the jurisdiction of their states. Thus the victim of a robbery often sought to take back some measure of what was lost, usually in like property or in proportionate value.

The sovereign saw this practice of private enforcement as a threat to his sovereign powers, and so regulated the practice through the letters of marque. A subject would appeal to his sovereign, give a description of what transpired, and then ask permission to go on a counterattack against the offending party. The trouble, however, was that the offending party was often nowhere to be found. Thus the reprisals carried out against an “offending” party usually ended up being carried out against the population or community from which the brigand originated. The effect of this practice, interestingly, was to foster greater communal bonds and ties and to cement the rise of the modern state.

One might ask at this point: what do letters of marque and reprisal have to do with cybersecurity? A lot, I think. Recently, the Washington Post reported that there is increasing interest in condoning “hacking back” against cyber attackers. Hacking back, or “active defense,” is basically attempting to trace the origins of an attack and then gain access to that network or system. With all of the growing concern about the massive amounts of data stolen from the likes of Microsoft, Target, Home Depot, JPMorgan Chase and nameless others, the ability to “hack back” and potentially do malicious harm to those responsible for data theft appears attractive. Indeed, Patrick Lin argues we ought to consider a cyber version of “stand your ground,” under which an individual is authorized to defend her network, data or computer. Lin also thinks that such a law may reduce the likelihood of cyberwar because one would not need to engage or even consult with the state, thereby implicating it in “war crimes.” As Lin states, “a key virtue of ‘Stand Your Cyberground’ is that it avoids the unsolved and paralyzing question of what a state’s response can be, legally and ethically, against foreign-based attacks.”

Yet this seems to be the opposite of the approach to take, especially given the nature of private enforcement, state sovereignty and responsibility. States may be interested in private companies defending their own networks, but one of the primary purposes of a state is to provide for public—not private—law enforcement. John Locke famously quipped in his Second Treatise that the question of who shall judge becomes an “inconvenience” in the state of nature, giving rise to increased uses of force and then war, and ultimately requiring the institution of public civil authority to judge disputes and enforce the law. Cyber “stand your ground” or private hack-backs place us squarely back in Locke’s inconvenient state.

Moreover, it runs contrary to the notion of state sovereignty. While many might claim that the Internet and the cyber domain show the weakness of sovereignty, they do not do away with it. Indeed, if we are to learn anything from the history of private enforcement and state jurisdiction, it is that sovereignty requires the state to sanction such behavior. The state would have to issue something tantamount to a letter of marque and reprisal: it would have to permit a private individual or company to seek recompense for its damage or data lost. Yet this is increasingly difficult for at least two reasons. The first is attribution. I will not belabor the point about the difficulty of attribution, which Lin seems to dismiss by stating that “the identities of even true pirates and robbers–or even enemy snipers in wartime–aren’t usually determined before the counterattack; so insisting on attribution before use of force appears to be an impossible standard.” True attribution for cyber attacks is a lengthy and time-consuming process, often requiring human agents on the ground; it is not merely a matter of tracing an IP address to a botnet. True identities are hard to come by, and equating a large cyber attack to a sniper is unhelpful. We may not need to know the social security number of a sniper, but we are clear that the person with the gun in the bell tower is the one shooting at us, and this permits us to use force in defense. With a botnet or a spoofed IP address, we are uncertain where the shots are really coming from. Indeed, it makes more sense to think of it like a string of hit men, each hiring a subcontractor, while we try to work out whom we have a right of self-defense against: the person doing the hiring, the hit men, or both?
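
A toy example may make the attribution point clearer. The sketch below invents an attack path; none of the addresses or roles corresponds to any real incident. A defender tracing packets sees only the last hop that touched her network, typically a compromised bystander, while the operator at the other end of the chain is simply not in the technical trail.

```python
# Illustrative only: why tracing an attack to an IP address is not the same
# as attributing it to a responsible party. Hosts and roles are invented.
attack_path = [
    {"host": "203.0.113.7",  "role": "botnet node (compromised home PC)"},
    {"host": "198.51.100.9", "role": "command-and-control relay on a rented server"},
    {"host": None,           "role": "actual operator (identity not in packet data)"},
]

def naive_traceback(path):
    """What the victim's logs show: only the last hop that touched the network."""
    return path[0]["host"]

def true_origin(path):
    """What attribution requires, and what the technical trail alone rarely yields."""
    return path[-1]["host"]

print(naive_traceback(attack_path))  # '203.0.113.7': a compromised bystander
print(true_origin(attack_path))      # None: the operator is not identified
```

A hack-back aimed at the first hop in this chain would strike the home PC’s owner rather than the attacker, which is exactly the reprisal-against-the-community dynamic the letters of marque produced.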

Second, even if we could engage a cyber letter of marque, we would have to have some metric for establishing a proportionate cyber counterattack. Yet what are identities, credit card numbers, or other types of “sensitive data” worth? What if they never get used? Is the harm then merely the intrusion? Proportionality in this case is not a cut-and-dried issue.

Finally, if we have learned anything from the history of letters of marque and reprisal, it is that they went out of favor. States realized that private enforcement, which turned into public reprisals during the 18th to early 20th centuries, merely encouraged more force in international affairs. The modern international legal system terms countermeasures those acts that are coercive but do not amount to uses of force (i.e. acts that would violate Article 2(4) of the United Nations Charter). The international community and individual states no longer issue letters of marque and reprisal. Instead, when states have their rights violated (or an “internationally wrongful act” taken against them), they utilize arbitration or countermeasures to seek redress. For a state to take lawful countermeasures, however, it must determine the state responsible for the wrongful act in question. Yet cyber attacks, if we are to rely on what professional cybersecurity experts tell us, are sophisticated in hiding their identities and origins. Moreover, even if one finds out the origin of the attack, this may be insufficient to ground a state’s responsibility for the act. There is always the deniability that the state issued a command or hired a “cyber criminal gang.” Thus countermeasures against a state in this framework may be illegal.

What all this means is that if we do not want to ignore current international law, or the teachings of history, we cannot condone private companies “hacking back.” The only way one could condone it is for the state to legalize it, and if this were the case, then it would be just like the state issuing letters of marque and reprisal. Yet by legalizing such a practice, states may open themselves up to countermeasures by other states. Given that most Internet traffic passes through the United States (US), many “attributable” attacks will look like they are coming from the US. This in turn means that many states would then have reason to cyber attack the US, thereby increasing and not decreasing the likelihood of cyberwar. Any proposal to condone retaliatory private enforcement in cyberspace should, therefore, be met with caution.

ISIS, Syria, the Rebels and the US-Led Coalition: What Governs Who?

In a phone call today with a friend working on issues pertaining to the Responsibility to Protect (R2P), an interesting question arose: what types of conflict are going on within the fight against ISIS? My friend wanted to draw attention to the R2P aspects of the crisis and ask whether the “intervention” on the side of the US was just according to these standards. While this is certainly an interesting question, I think it points us in the direction of a larger set of questions regarding the nature of the conflict itself. That is, what are the existing laws through which we ought to view the unfolding situation inside Syria? The complexity of the situation, while definitely a headache for strategists and politicians, is going to become equally difficult for international lawyers. In particular, the case involves at least two different bodies of law, as well as laws pertaining to R2P crimes. Thus any action within Syria against ISIS, or Al-Qaeda, or Assad, or the rebels will have to be dealt with relationally.

Let us look at the case. Syria has been experiencing civil war for three years. Assad’s violations of the rights of his people mean that he has manifestly failed to uphold the Responsibility to Protect doctrine. R2P holds that states have the primary responsibility to protect their peoples from genocide, ethnic cleansing, war crimes and crimes against humanity. Given Assad’s use of chemical weapons and cluster munitions, as well as his targeting of civilian populations, he has clearly committed war crimes and crimes against humanity. That Assad has employed the Shabiha, a private paramilitary force, to engage in killing means that he has more than likely engaged in ethnic cleansing as well. In a perfect world, the Security Council would have acted in a “timely and decisive manner” to stop such abuses and would have referred the case to the International Criminal Court (ICC) for prosecution. Of course, in May of this year, 53 countries urged the Security Council to refer the situation to the ICC. A mere two days later, Russia and China blocked the referral by exercising their permanent veto powers. After three years of bloodshed, civil breakdown, hundreds of thousands dead, and three million refugees, it is all too clear that there was no desire to intervene in the crisis. Thus we can say that there is an ongoing R2P crisis, and that Assad—as leader of the government of Syria—ought to be held to account for these acts. Moreover, there is a failure of the international community to live up to the obligations it voluntarily incurred under the 2005 World Summit Outcome Document.

The sheer destruction and violence inside Syria is what permitted the rise of ISIS. This seems an indisputable fact. The group capitalized on the civil war and breakdown, the tensions between and factionalization of the Syrian rebel groups, and the international community’s reluctance to engage Assad. Until ISIS pushed into Iraq, the international community would probably have let it be, and international law would have deemed the issue one of a non-international armed conflict. However, once ISIS set its sights on the Mosul Dam, the international community began to wake up.

With this act, ISIS transformed the non-international armed conflict into a two-dimensional one; in other words, it added an international dimension. The fighting between the rebels and the Assad regime continued (and continues) to be a non-international armed conflict, but ISIS’s push into Iraq meant that the ISIS-Iraq-Kurd conflict became international. If one doubts this reading, then the conflict at the very least became a transnational armed conflict; but because ISIS targeted Iraqi infrastructure, it seems more likely that this single act transformed it into an international one.

Now that the US and other regional powers have entered the fray, it is most definitely an international armed conflict – between ISIS and these states. However, we must still remember that the civil war between Assad and the various rebel fighters is also still ongoing (as, presumably, is the fighting between ISIS and Assad). Thus there is still a non-international armed conflict here too. And, let us not forget, R2P and Assad!

What does this all mean? In short, it means that the only way to tell which set of laws applies is to look at the relation of the parties at any given moment. The casuistry here becomes the all-important determining factor. For example, if the US trains and arms “moderate” Syrian rebels, one would have to look at the particular operation to determine which set of laws applies. Is the operation undertaken in support of, or in concert with, the US-led coalition against ISIS? Yes? Then international humanitarian law applies. Is the operation undertaken by these trained and armed rebels one against the Assad regime? Yes? Well, then this may or may not be a non-international armed conflict. The International Court of Justice, for instance, holds that in the case of third-party intervention in support of a rebel group, the third party needs to have “overall control” of the rebel group for that conflict to be considered “internationalized.” Given the different rebel groups, this could become a daunting analysis. Is control of one group sufficient to say it holds for all? Or just for that one group?
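
To see how relational this gets, here is a deliberately crude sketch of the classification logic the paragraph above describes. It follows this post’s own characterization of the conflicts (including the “overall control” test as described here); the function, inputs, and category labels are invented for illustration and are not a statement of how any court would actually rule.

```python
def classify_operation(against, with_us_led_coalition, third_party_overall_control):
    """Toy, relation-by-relation classification of a single operation.

    Mirrors the reasoning in the post: the same armed group can be fighting
    in an international armed conflict in one operation and a non-international
    one in the next.
    """
    if against == "ISIS" and with_us_led_coalition:
        return "international armed conflict: full IHL framework applies"
    if against == "Assad regime":
        if third_party_overall_control:
            return "internationalized armed conflict"
        return "non-international armed conflict: more rudimentary treaty rules"
    return "unclear: depends on the relation of the parties at that moment"

# The same rebel unit, two operations, two legal regimes:
print(classify_operation("ISIS", True, False))
print(classify_operation("Assad regime", False, False))
```

The point of the toy is not the labels but the dependency: every input is a fact about relationships that would have to be established, operation by operation, before anyone could say which rules govern.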

These little details matter because the law of international armed conflict is much more robust than the law pertaining to non-international armed conflict. As the International Committee of the Red Cross notes:

“Although the existence of so many provisions and treaties may appear to be sufficient, the treaty rules applicable in non-international armed conflicts are, in fact, rudimentary compared to those applicable in international armed conflicts. Not only are there fewer of these treaty rules, but they are also less detailed and, in the case of Additional Protocol II, their application is dependent on the specific situations described above.”

In other words, there are gaps in the protection of rights, persons, property and the environment in non-international armed conflict that do not exist in the law of international armed conflict. Thus the case of ISIS challenges the international community in more ways than one. It is not that there are no laws applying to these conflicts, but that the conflicts are so convoluted that the states and parties to them, as well as potential international prosecutors, will have to rely on much more circumstantial evidence to sort out what is permissible and when. This, however, is not something likely to happen ex ante in targeting operations, training and arming. I fear that while there are overlapping jurisdictions of rules and laws here, the convoluted nature of the conflicts will engender an even greater realm of permissiveness, and the parties will end up transferring more risk and harm to bystanders. Civilians always suffer, to be sure, but the laws of war are supposed to mitigate that suffering. If the laws of war are convoluted because of the complexity of the actors and their relationships, then this will have greater deleterious effects on the lives and rights of noncombatants.

Monstermind or the Doomsday Machine? Autonomous Cyberwarfare

Today in Wired magazine, James Bamford published a seven-page story and interview with Edward Snowden. The interview is another unique look into the life and motivations of one of America’s most (in)famous whistleblowers; it is also another step in revealing the depth and technological capacity of the National Security Agency (NSA) to wage cyberwar. What is most disturbing about today’s revelations is not merely what they entail from a privacy perspective, which is certainly important, but what they entail from an international legal and moral perspective. Snowden tells us that the NSA is utilizing a program called “Monstermind.” Monstermind automatically hunts “for the beginnings of a foreign cyberattack. [… And then] would automatically block it from entering the country – a ‘kill’ in cyber terminology.” While this seems particularly useful, and morally and legally unproblematic, as it is a defensive asset, Monstermind adds another, not so unproblematic, capability: autonomously “firing back” at the attacker.
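
Structurally, what Snowden describes is a detect-block-respond loop with the response wired directly to the detector. The sketch below is my own simplification, not a description of any real NSA system; every name, threshold, and data value in it is invented. It is only meant to show where the human decision point disappears once “firing back” is automated.

```python
# A deliberately simplified sketch of an automated detect-block-respond pipeline.
# All names, thresholds, and data are invented; this reflects the structure
# described in the post, not any real system.

def detect_incoming_attack(traffic_sample):
    """Pretend classifier: return metadata about a suspected attack, or None."""
    if traffic_sample.get("anomaly_score", 0) > 0.9:
        return {"apparent_source": traffic_sample["source_ip"]}
    return None

def block(attack):
    print(f"blocked traffic from {attack['apparent_source']}")  # the defensive 'kill'

def strike_back(attack):
    # No human reviews this call; it fires on the *apparent* source, which may be spoofed.
    print(f"launching counter-operation against {attack['apparent_source']}")

def monitoring_loop(traffic_stream, autonomous_response=True):
    for sample in traffic_stream:
        attack = detect_incoming_attack(sample)
        if attack:
            block(attack)               # defensive step: few object to this
            if autonomous_response:
                strike_back(attack)     # offensive step: the human is out of the loop

monitoring_loop([{"source_ip": "192.0.2.55", "anomaly_score": 0.95}])
```

Everything in the legal and moral worries discussed below (spoofed origins, the absence of deliberation, the impossibility of weighing proportionality in advance) lives in that single automated strike-back call.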

Snowden cites two problems with this new tactic. First, he claims that it would require access to “all [Internet] traffic flows” coming into and out of the US. This means in turn that the NSA is “violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.” Second, he thinks it could accidentally start a war. More than this, it could accidentally start a war with an innocent third party, because an attacking party could spoof the origin of the attack to make it look like another country is responsible. In cyber jargon, this is the “attribution problem,” whereby one cannot with certainty attribute an attack to a particular party.

I would, however, like to raise another set of concerns in addition to Snowden’s: that the US is knowingly violating international humanitarian law (IHL) and acting against just war principles. First, through automated or autonomous responses, the US cannot by definition consider or uphold Article 52 of Additional Protocol I to the Geneva Conventions. It will violate Article 52 on at least two grounds. First, it will violate Article 52(2), which requires states to limit their attacks to military objectives. These include “those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.” While one might object that the US has not ratified Additional Protocol I, the provision is still widely held to be a customary rule. Even if one holds that this is not enough, we can still claim that autonomous cyber attacks violate US targeting doctrine (and thus Article 52(2)), because that doctrine requires that any military objective be created by a military commander and vetted by a Judge Advocate General, ensuring that targeting complies with IHL. That a computer system strikes “back” without direction from a human being undermines the entire targeting process. Given that the defensive capacity to “kill” the attack is present, there seems no good reason to counter-strike without human oversight. Second, striking back at an ostensibly “guilty” network will more than likely have significant effects on civilian networks, property and functionality. This would violate the principle of distinction, laid down in Article 52(1).

If one still wanted to claim that the NSA is not a military unit, and that any “strike back” cyber attack is not one undertaken in hostilities (and thus not governed by IHL), then we would still require an entire theory (and body of law) of what constitutes a legitimate use of force in international law that does not violate the United Nations Charter, particularly Article 2(4), which prohibits states from using or threatening to use force. One might object that a cyber attack that does not result in property damage or the loss of life is not subject to this prohibition. However, taking the view that such an attack does not rise to the level of an armed attack in international law (see, for instance, the Tallinn Manual) does not mean that such an attack is not a use of force, and thus still prohibited. Furthermore, defensive uses of force in international law are permissible only in response to an armed attack (Article 51).

Second, autonomous cyber attacks cannot satisfy the just war principles of proportionality. The first proportionality principle has to do with ad bellum considerations of whether or not it is permissible to go to war. Whether we view the “strike” as not engaging in war at all, or as a different kind of war, is another question for another day. Today, all we ought to consider is that a computer program automatically responds in some manner (which we do not know) to an attack (presumably preemptively). That response may trigger an additional response from the initial attacker, either automatically or not. (This is Snowden’s fear of accidental war.) Jus ad bellum proportionality requires balancing all the harms against the benefits of engaging in hostilities. Yet this program vitiates the very difficult considerations required; in fact, it removes the capacity for such deliberation.

The second proportionality principle that Monstermind violates is the in bello version. This version requires that one use the least amount of force necessary to achieve one’s goals. One wants to temper the violence used in the course of war, to minimize destruction, death and harm. The issue with Monstermind is that prior to any identification of an attack, and any “kill” of an incoming attack, someone has to create and set into motion the second step of “striking back.” However, it is very difficult, even in times of kinetic war, to respond proportionately to an attack. Is x amount of force enough? Is it too much? How can one preprogram a “strike back” attack for a situation that may or may not fit the proportionality envisioned by an NSA programmer at any given time? Can a programmer put herself into a position to envision how she would act at a given time to a particular threat? (This is what Danks and Danks (2013) identify as the “future self-projection bias.”) Moreover, if this is a “one-size-fits-all” model of striking back, then by definition it cannot satisfy in bello proportionality, because each situation will require a different type of response to ensure that one is using the minimal amount of force possible.

What all of this tells us is that the NSA is engaging in cyberwar: autonomously, automatically and without our or our adversaries’ knowledge. In essence it has created not Monstermind but the Doomsday Machine. It has created a machine that possesses an “automated and irrevocable decision making process which rules out human meddling” and thus “is terrifying, simple to understand, and completely credible and convincing,” now that we know about it.

Do Not Despair: Russian Intervention and International Law

Russia’s military intervention in Ukraine naturally prompted a lot of talk about the limits of international law. Eric Posner noted: “1. Russia’s military intervention in Ukraine violates international law. 2. No one is going to do anything about it.” Julian Ku argued: “International law can be, and often is, a very important tool for facilitating international and transnational cooperation. But it is not doing much to resolve the Ukraine crisis, and international lawyers need to admit that.” For Ku, the current crisis supports the claims of Rationalist law-skeptics: international law works when legal requirements align with self-interest. Many others, including a good portion of my students, see the failure of international law in Ukraine. Continue reading

Conquest by other means, Ukraine edition

Over at the Monkey Cage, Henry Farrell suggests that President Obama is using the OSCE to give Putin an exit strategy. Farrell writes:

Obama’s phone call with Putin on Saturday suggests that the United States wants to invoke the old-style OSCE. It notes that Russia’s armed intervention is inconsistent with Russia’s commitments under the Helsinki Final Act (the agreement that established the OSCE), calls for “the dispatch of international observers under the auspices of the United Nations Security Council or the Organization for Security and Cooperation in Europe (OSCE)”…

If Putin wants an “exit strategy,” Farrell continues, this is it: “There is no reason why the OSCE could not help broker compromises over new elections and push the Ukrainian government to guarantee the rights of Russian speakers in Ukraine.”

My question to Farrell is: is this Putin’s possible exit strategy, or the United States’ and the EU’s?

Continue reading

International Institutions Mobilize Opponents Too

Members of international institutions typically honor their commitments. But that does not, by itself, tell us much. States are unlikely to join institutions that require them to do things they have no intention of doing. Indeed, some argue that institutions merely act to screen out those least likely to comply. Others, however, have argued that institutions do in fact constrain states – that they are not mere epiphenomena. One prominent mechanism through which institutions are thought to alter state behavior is by mobilizing pro-compliance groups domestically. Institutions may lack enforcement capacity, after all, but few governments are entirely insensitive to domestic pressure.

But, as Stephen Chaudoin cogently observes in this working paper, those who stand to lose if the government adopts the institution’s preferred policy are unlikely to give in without a fight. And such groups virtually always exist; if they did not, there would be little need for institutions to promote cooperation in the first place. Put differently, while WTO rulings may raise awareness about the effects of tariffs and Amnesty International might draw attention to human rights abuses, the net effect of such efforts might simply be to increase the amount of effort that those advantaged by the status quo invest in defending it.

Continue reading

The Scarcity of Politics in Cosmopolitan Theory: Part I

Syria has raised several questions that pertain to morality, legality, and strategy in international relations. Discussed extensively on the Duck, Opinio Juris, The Monkey Cage, and elsewhere, the situation in Syria has sparked a valuable debate on critical issues, both old and new. I would like to touch upon the implications of Syria for Cosmopolitanism. I think Syria has again highlighted the core dilemma of Cosmopolitan theory: the scarcity of politics. Protecting inalienable human rights requires applying normative cosmopolitan principles in practice. Application necessitates a departure from cosmopolitan normative theory towards cosmopolitan practice. And practice is inevitably political. Questions about when and how R2P applies, when intervention without Security Council authorization may be justified, and when a state loses its sovereign privileges because the government attacks its own people are questions of applied normativity. Cosmopolitan theory still offers relatively little on the politics of norm implementation.

At its core, Cosmopolitanism asserts that there are “moral obligations owed to all human beings based solely on our humanity alone, without reference to race, gender, nationality, ethnicity, culture, religion, political affiliation, state citizenship, or other communal particularities” (Brown and Held). Taking the inherent moral worth of the individual as its starting point, Legal Cosmopolitanism calls for the institutionalization of key cosmopolitan normative principles. Versions of Cosmopolitanism abound; I bracket these debates for the time being and recommend Catherine Lu’s article for a useful and critically informed review. But there is consensus among scholars that honoring and protecting the individual is the core principle shared by Cosmopolitans of all stripes.

From Cicero to Kant, Pogge to Taylor, a lot has been said about the promises as well as the perils of Legal Cosmopolitanism. But as Garrett Wallace Brown notes in a recent article, we still have not moved “from cosmopolitan normative theory to cosmopolitan legal practice.” I think this is one reason why Cosmopolitanism seems to have little to say on the implementation of R2P. Thou shalt not kill may indeed be a universal norm. Yet how it is applied in practice by people and institutions varies. Shibley Telhami noted that the U.S. should not expect a “thank you” from the Arab world for intervening. This does not mean that Arab public opinion supports the use of chemical weapons on civilian populations. But it does suggest that the Arab world has a different understanding of how civilians need to be protected and criminal actions punished.

Cosmopolitan theory generally has a hard time tackling normativity in practice. It talks about our obligations towards global compatriots and calls for reforming existing international organizations to institutionalize cosmopolitan ideals. Yet it does not always tell us what our obligations are in practice and how they relate to our other moral duties, including those to the nation. It also gives us little policy guidance on institutional reform and on the role of the state in cosmopolitics. And the political implications of applied cosmopolitanism for democracy, moral diversity, and individual autonomy, to name a few important issues, sometimes remain unexplored. Of course, progress has been made and there is growing interest in applied global normativity. But I think IR scholars could offer additional insights that will inform theory and facilitate empirical research. I will sketch out some of my thoughts in Part II of this discussion. (Image source: http://criticalworld.net/cosmopolitanism/M. Roberts)

What Do International Law and Norms Say About Burning People Alive?

One line of discussion this past week has been whether it makes any kind of moral sense to think that death by chemical weapon is so much worse than death by “conventional” weapons. Video imagery captured by the BBC in the aftermath of another horrific massacre in Syria throws this into stark relief. At least ten children were burned to death and scores of others were left with horrifying injuries after a flammable substance was dropped on a school playground yesterday. Continue reading

Intervention to Punish? Or to Protect?

Two kinds of military intervention are being discussed and conflated by political elites (like Nicholas Kristof) and international diplomats. The first is an enforcement operation to punish a state for violating a widespread and nearly universal global prohibition norm against the use of chemical weapons. This is what Kristof refers to in the title of his Times op-ed, “Reinforce a Norm in Syria.” The second is a humanitarian operation to protect civilians against a predatory government. This is what Kristof means when he compares proposed military strikes in Syria to the interventions that happened in Bosnia and Kosovo and (tragically) didn’t happen in Rwanda.

Well, it’s useful to clarify which we are talking about, since the two kinds of operation involve very different tactics and different kinds of legal and moral reasoning. I discuss both at Foreign Affairs this morning:

[If punishing norm violators is the goal], the appropriate course of action would be to, first, independently verify who violated it…. Second, the United States would have to consider a range of policy options for affirming, condemning, and lawfully punishing the perpetrator before resorting to force, particularly unlawful force… Third, should the United States decide on military action, with or without a UN Security Council resolution, it would need to adhere to international norms regulating the use of specific weapons in combat.

But such a strike should not be confused with military action to protect civilians. Continue reading

If Syria Used WMD, It Violated International Law. But So Would a US Intervention.

In the New York Times yesterday, Northwestern University political scientist Ian Hurd lays down the law on Syria and intervention:

As a legal matter, the Syrian government’s use of chemical weapons does not automatically justify armed intervention by the United States… Syria is a party to neither the Biological Weapons Convention of 1972 nor the Chemical Weapons Convention of 1993… Syria is a party to the Geneva Protocol, a 1925 treaty that bans the use of toxic gases in wars. But this treaty was designed after World War I with international war in mind, not internal conflicts.

[And] the conventions also don’t mean much unless the Security Council agrees to act. The United Nations Charter… demands that states refrain “from the threat or use of force against the territorial integrity or political independence of any state.” The use of force is permitted when authorized by the Security Council or for self-defense  — but not purely on humanitarian grounds.

Of course ethics, not only laws, should guide policy decisions…  if the White House takes international law seriously — as the State Department does — it cannot try to have it both ways. It must either argue that an “illegal but legitimate” intervention is better than doing nothing, or assert that international law has changed — strategies that I call “constructive noncompliance.” In the case of Syria, I vote for the latter.

Hurd is right about a great many things: that Syria’s obligations under treaty law are weaker than people want to think; that there are legal tensions here that the US cannot and shouldn’t try to wish away; that a decision must be made between doing something and doing something lawfully; and that the robustness of international norms around both R2P and chemical weapons is at stake in how the US and UK frame the discussion.

But I think Hurd is both understating the case about Syria’s international legal obligations and overstating the case about US options in framing a potential military intervention. International law indeed is “changing,” but the relevant changes he describes apply to Syria’s responsibility to its civilians, not to the US’s right to reinterpret the UN Charter. And ultimately, as he points out, even Syria’s violations of law don’t make it lawful for the US to intervene without a Security Council resolution, however ethically right such an intervention may be. The two are really separate legal questions, so I’ll address them separately below. Continue reading

Resistance is Not Futile.

A claim common among opponents of a treaty ban on autonomous weapon systems (AWS) is that treaties banning weapons don’t work – suggesting efforts to arrest the development of AWS are an exercise in futility. Now this claim has been picked up uncritically by the editors at Bloomberg, writing in the derisively titled, “No Really, How Do We Keep Robots From Destroying Humans?”:

“Bans on specific weapons systems — such as military airplanes or submarines — have almost never been effective in the past. Instead, legal prohibitions and ethical norms have arisen that effectively limit their use. So a more promising approach might be to adapt existing international law to govern autonomous technology — for instance, by requiring that such weapons, like all others, can’t be used indiscriminately or cause unnecessary suffering.”

The editors point out a valid distinction between weapons that are banned outright and more generic questions of how the use of a specific weapon may or may not be lawful (the principles of proportionality and distinction apply to the use of all weapons). But they also make a conceptual and a causal error, and in so doing woefully underestimate the political power of comprehensive treaty bans. Continue reading

War Law, the “Public Conscience” and Autonomous Weapons

In the Guardian this morning, Christof Heyns very neatly articulates some of the legal arguments about allowing machines the ability to target human beings autonomously – whether they can distinguish civilians from combatants, make qualitative judgments, or be held responsible for war crimes. But after going through this back and forth, Heyns appears to reframe the debate entirely away from the law and into the realm of morality:

The overriding question of principle, however, is whether machines should be permitted to decide whether human beings live or die.

But this “question of principle” is actually a legal argument itself, as Human Rights Watch pointed out last November in its report Losing Humanity (p. 34): the entire idea of out-sourcing killing decisions to machines is morally offensive, frightening, even repulsive to many people, regardless of utilitarian arguments to the contrary: Continue reading

War Law, the "Public Conscience" and Autonomous Weapons

In the Guardian this morning, Christof Heyns very neatly articulates  some of the legal arguments with allowing machines the ability to target human beings autonomously – whether they can distinguish civilians and combatants, make qualitative judgments, be held responsible for war crimes. But after going through this back and forth, Heyns then appears to reframe the debate entirely away from the law and into the realm of morality:

The overriding question of principle, however, is whether machines should be permitted to decide whether human beings live or die.

But this “question of principle” is actually a legal argument itself, as Human Rights Watch pointed out last November in its report Losing Humanity (p. 34): that the entire idea of out-sourcing killing decisions to machine is morally offensive, frightening, even repulsive, to many people, regardless of utilitarian arguments to the contrary: Continue reading

Human Rights Treaties are Like Virginity Pledges, Part Deux

A little over a month ago, I wrote about the growing academic literature concerning human rights treaties and their lack of influence on human rights practices. Based on my own experiences growing up in parts of the U.S. where it’s assumed we can “[Rebuild] Our Culture One Purity Ball at a Time,” I likened human rights treaties to virginity pledges, saying that “in most circumstances, these human rights ‘pledges’ don’t work to improve human rights practices. In some circumstances, they can actually lead to a worsening of governmental human rights practices.” There is a brand-spankin-new forthcoming article in the American Journal of Political Science by Yonatan Lupu of George Washington University that may indicate my previous conclusion was overstated: when fully accounting for state preferences in treaty commitments, Lupu does not find any evidence that treaties make things worse. This is good news for human rights advocates everywhere and very important for human rights/treaty scholarship! Lupu’s article definitely deserves your attention.

Continue reading

Podcast No. 13 – A Conversation with Nick Onuf (mp3)

The thirteenth Duck of Minerva podcast features Nicholas Onuf. Nick is one of the “founding parents” of contemporary constructivism. His book, World of Our Making: Rules and Rule in Social Theory and International Relations — which has been reissued by Routledge — introduced the term “constructivism” to describe an approach to the study of world politics. Continue reading

Podcast No. 13 – A Conversation with Nick Onuf (m4a)

The thirteenth Duck of Minerva podcast features Nicholas Onuf. Nick is one of the “founding parents” of contemporary constructivism. His book, World of Our Making: Rules and Rule in Social Theory and International Relations — which has been reissued by Routledge — introduced the term “constructivism” to describe an approach to the study of world politics.

The podcast is wide-ranging — part oral history, part interview, part discussion — such that I’ve had difficulty figuring out how to insert chapters. If you’re listening via m4a, you’ll see that the podcast has only a few chapter titles. “Enter Constructivism,” for example, contains not only information about World of Our Making but also about the state of the field in the 1980s, the rise of liberal institutionalism, and so on.

Continue reading
