Today in Wired magazine, James Bamford published a seven-page story and interview with Edward Snowden. The interview is another unique look into the life and motivations of one of America's most (in)famous whistleblowers; it is also another step in revealing the depth of the National Security Agency's (NSA) technological capacity to wage cyberwar. What is most disturbing about today's revelations is not what they entail from a privacy perspective, important as that is, but what they entail from an international legal and moral perspective. Snowden tells us that the NSA is utilizing a program called "MonsterMind." MonsterMind automatically hunts "for the beginnings of a foreign cyberattack. [… And then] would automatically block it from entering the country – a 'kill' in cyber terminology." While this seems particularly useful, and morally and legally unproblematic insofar as it is a defensive asset, MonsterMind adds another, far more problematic capability: autonomously "firing back" at the attacker.
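To see the shape of the worry, consider a deliberately toy sketch of the detect-block-retaliate loop that Bamford's description implies. Everything here is invented for illustration; the function names, the detection heuristic, and the retaliation step are assumptions, not a description of the actual system.

```python
# Purely illustrative sketch of an autonomous detect-block-retaliate loop.
# All names and logic are hypothetical; nothing here describes the real system.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str    # the *apparent* origin of the traffic
    payload: bytes

def looks_like_attack(pkt: Packet) -> bool:
    """Stand-in for whatever signature or anomaly detection scans traffic."""
    return b"exploit" in pkt.payload  # toy heuristic

def block(pkt: Packet) -> None:
    """The defensive 'kill': stop the traffic at the border."""
    print(f"blocked traffic from {pkt.source}")

def strike_back(target: str) -> None:
    """The added step: retaliation with no human in the loop."""
    print(f"counter-attack launched against {target}")

def monitor(traffic) -> None:
    for pkt in traffic:
        if looks_like_attack(pkt):
            block(pkt)               # defensive, arguably unproblematic
            strike_back(pkt.source)  # offensive, aimed at the apparent origin
```

The point of the sketch is the last line: the retaliation is keyed to the packet's apparent origin, and no human judgment intervenes between detection and counter-attack.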
Snowden cites two problems with this new tactic. First, he claims that it would require access to "all [Internet] traffic flows" coming into and out of the US. This means in turn that the NSA is "violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time." Second, he thinks it could accidentally start a war. Worse, it could accidentally start a war with an innocent third party, because an attacking party can spoof the origin of an attack to make another country look responsible. In cyber jargon, this is the "attribution problem": one cannot attribute an attack to a particular party with certainty.
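Snowden's second worry falls out of the sketch above almost immediately: the "source" of a packet is attacker-controlled data, so an automated counter-strike keyed to it can be aimed at anyone. Continuing the hypothetical example:

```python
# The attribution problem in miniature: the apparent origin is whatever
# the attacker chooses to write into the packet, so the autonomous
# response retaliates against whomever the attacker impersonates.

spoofed = Packet(source="innocent-third-party.example", payload=b"exploit ...")
monitor([spoofed])
# blocked traffic from innocent-third-party.example
# counter-attack launched against innocent-third-party.example
```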
I would, however, like to raise another set of concerns in addition to Snowden's: that the US is knowingly violating international humanitarian law (IHL) and acting against just war principles. First, through automated or autonomous responses, the US cannot by definition consider or uphold Article 52 of Additional Protocol I of the Geneva Conventions. It will violate Article 52 on at least two grounds. First, it will violate Article 52(2), which requires states to limit their attacks to military objectives. These include "those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage." While one might object that the US has not ratified Additional Protocol I, the provision is widely held to reflect customary international law. Even if one finds this insufficient, we can still claim that autonomous cyber attacks violate US targeting doctrine (and thus Article 52(2)), because that doctrine requires that any military objective be designated by a military commander and vetted by a Judge Advocate General, ensuring that targeting complies with IHL. A computer system that strikes "back" without direction from a human being undermines the entire targeting process. Given that the defensive capacity to "kill" the attack is already present, there seems no good reason to counter-strike without human oversight. Second, striking back at an ostensibly "guilty" network will more than likely have significant effects on civilian networks, property and functionality. This would violate the principle of distinction, laid down in Article 52(1).
If one still wanted to claim that the NSA is not a military unit, and that any "strike back" cyber attack is not undertaken in hostilities (and thereby not governed by IHL), then we would still require an entire theory (and body of law) of what constitutes a legitimate use of force in international law that does not violate the United Nations Charter, particularly Article 2(4), which prohibits states from using or threatening to use force. One might object that a cyber attack that results in no property damage or loss of life is not subject to this prohibition. However, taking the view that such an attack does not rise to the level of an armed attack in international law (see, for instance, the Tallinn Manual) does not mean that it is not a use of force, and thus it remains prohibited. Furthermore, defensive uses of force are permissible under Article 51 only in response to an armed attack.
Second, autonomous cyber attacks cannot satisfy the just war principles of proportionality. The first proportionality principle concerns the ad bellum question of whether it is permissible to go to war at all. Whether we should view the "strike back" as engaging in war, or as a different kind of war, is a question for another day. Today, all we need to consider is that a computer program automatically responds in some manner (which we do not know) to an attack (presumably preemptively). That response may trigger an additional response from the initial attacker, whether automated or not. (This is Snowden's fear of accidental war.) Jus ad bellum proportionality requires that all the harms of engaging in hostilities be weighed against the benefits. Yet this program vitiates the very difficult deliberation required; in fact, it removes the capacity for such deliberation altogether.
The second proportionality principle that MonsterMind violates is the in bello version. This version requires that one use the least amount of force necessary to achieve one's goals: one wants to temper the violence used in the course of war, to minimize destruction, death and harm. The issue with MonsterMind is that before any attack is identified, and before any "kill" of an incoming attack, someone has to create and set into motion the second step of "striking back." Yet it is very difficult, even in times of kinetic war, to respond proportionately to an attack. Is x amount of force enough? Is it too much? How can one preprogram a "strike back" attack for a situation that may or may not fit the proportionality envisioned by an NSA programmer at any given time? Can a programmer put herself in a position to envision how she would act at a given time to a particular threat? (This is what Danks and Danks (2013) identify as the "future self-projection bias.") Moreover, if this is a "one-size-fits-all" model of striking back, then it cannot by definition satisfy in bello proportionality, because each situation will require a different type of response to ensure that one uses the minimal amount of force possible.
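The in bello problem can also be put in programming terms: a preprogrammed response is fixed at the time the code is written, while proportionality is a property of the situation at the time of the strike. A hypothetical sketch of the mismatch, with the severity scale and response tiers invented purely for illustration:

```python
# Hypothetical illustration of the one-size-fits-all problem. The severity
# scale and response tiers are invented; the point is the structural mismatch.

PREPROGRAMMED_RESPONSE = "disable the attacking network"  # fixed in advance

def proportionate_response(severity: float) -> str:
    """What in bello proportionality would demand: force scaled to the
    actual attack, judged in the circumstances ruling at the time."""
    if severity < 0.3:
        return "log and block only"
    if severity < 0.7:
        return "block and filter the offending hosts"
    return "disable the attacking network"

# The automated system returns PREPROGRAMMED_RESPONSE regardless of
# severity, so for any low-severity attack it exceeds the minimal force
# necessary -- and the programmer cannot know the severity in advance.
```

Even this toy version concedes too much: it assumes severity can be reduced to a number known at strike time, which is precisely what the future self-projection bias calls into question.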
What all of this tells us is that the NSA is engaging in cyberwar, autonomously, automatically and without our or our adversaries' knowledge. In essence it has created not MonsterMind but Dr. Strangelove's Doomsday Machine: a machine that possesses an "automated and irrevocable decision making process which rules out human meddling" and thus "is terrifying, simple to understand, and completely credible and convincing" now that we know about it.