Tag: cyber war

Kill Webs: The Wicked Problem of Future Warfighting

The common understanding in military circles is that the more data one has, the more information one possesses.  More information leads to better intelligence, and better intelligence produces greater situational awareness.  Sun Tzu rightly understood this cycle two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or a ship’s pings.

Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept.  Unfortunately, however, there is scant discussion about what the Kill Web actually is or entails.  We have glimpses of the technologies that will comprise it, such as integrating sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.

Strategic Surprise? Or The Foreseeable Future

When the Soviets launched Sputnik in 1957, the US was taken off guard.  Seriously off guard.  While Eisenhower didn’t think the pointy satellite was a major strategic threat, the public perception was that it was.  The Soviets could launch rockets into space, and if they could do that, they could easily launch nuclear missiles at the US.  So, aside from a damaged US ego about losing the “space race,” the strategic landscape shifted quickly and the “missile gap” fear was born.

The US’s “strategic surprise” and the subsequent public backlash caused it to embark on a variety of science and technology ventures to ensure that it would never face such surprise again.  One new agency, the Advanced Research Projects Agency (ARPA), was tasked with generating strategic surprise – and guarding against it.  While ARPA became DARPA (the Defense Advanced Research Projects Agency) in the 1970s, its mission did not change.

DARPA has been, and still is, the main source of major technological advancement for US defense, and we would do well to remember its primary mission: to prevent strategic surprise.  Why, one might ask, is this important to students of international affairs?  Because technology has always been one of the major variables (sometimes ignored) that affects relations between international players.  Who has what, what their capabilities are, whether they can translate those capacities into power, whether they can reduce uncertainty and the “fog and friction” of war, whether they can predict future events, whether they can understand their adversaries, and on and on the questions go.  But at base, we utilize science and technology to pursue our national interests and answer these questions.

In my last post, I brought attention to the DoD’s new “Third Offset Strategy.”  This strategy, I explained, is based on the assumption that scientific achievement and the creation of new weapons and systems will allow the US to maintain superiority and never (again) fall victim to strategic surprise.  Like the first and second offsets, the third seeks to leverage advancements in physics, computer science, robotics, artificial intelligence, and electrical and mechanical engineering to “kick the crap” out of any potential adversary.

Yet, aside from noting these requirements, what, exactly, would the US need to do to “offset” the threats from Russia, China, various actors in the Middle East, terrorists (at home and abroad), and any unforeseen “unknown unknowns”?  I think I have a general idea, and if I am even partially correct, we need to have a public discussion about this now.

Deterrence in Cyberspace and the OPM Hack

I have yet to weigh in on the recent hack on the Office of Personnel Management (OPM).  This is mostly due to two reasons.  First is the obvious one for an academic: it is summer! But the second, well, that is due to the fact that, as most cyber events go, this one continues to unfold. When we learned of the OPM hack earlier this month, the initial figure was 4 million records. That is, 4 million present and former government employees’ personal records were compromised. This week, we’ve learned that it is more like 18 million.  While some argue that this hack is not something to be worried about, others are less sanguine.  The truth of the matter is, we really don’t know. Coming out on one side or the other is a bit premature.  The hack could be state-sponsored, with the data squirreled away in a foreign intelligence agency. Or it could be state-sponsored, but with the data sold off to high bidders on the darknet. Right now, it is too early to tell.

What I would like to discuss, however, is what the OPM hack—and many recent others, like the Anthem hack—show about thinking on cybersecurity and cyber “deterrence.”  Deterrence, as any IR scholar knows, is about getting one’s adversary not to undertake some action or behavior.  It’s about keeping the status quo. When it comes to cyber-deterrence, though, we are left with serious questions about this simple concept. Foremost amongst them is: deterrence from what? All hacking? Data theft? Infrastructure damage? Critical infrastructure damage? What is the status quo? The new cybersecurity strategy released by the DoD in April is of little help. It merely states that the DoD wants to deter states and non-state actors from conducting “cyberattacks against U.S. interests” (10).  Yet this is pretty vague. What counts as a U.S. interest?

Not What We Bargained For: The Cyber Problem

Last week the New America Foundation hosted the launch of its interdisciplinary cybersecurity initiative. I was fortunate enough to be asked to attend and speak, but the real benefit was that I was afforded an opportunity to listen to some really remarkable people in the cyber community discuss cybersecurity, law, and war.  I heard a few very interesting comments. For instance, Assistant Attorney General John Carlin claimed that “we” (i.e. the United States) have “solved the attribution problem,” and the National Security Agency Director and Cyber Command (CYBERCOM) Commander, Admiral Mike Rogers, said that he will never act outside the bounds of law in his two roles.  These statements got me thinking about war, cyberspace and international relations (IR).

In particular, IR scholars have tended to argue over the definitions of “cyberwar,” and whether and to what extent we ought to view this new technology as a “game-changer” (Clarke and Knake 2010; Rid 2011; Stone 2011; Gartzke 2013; Kello 2013; Valeriano and Maness 2015).   Liff (2012), for instance, argues that cyber power is not a “new absolute weapon,” and it is instead beholden to the same rationale of the bargaining model of war. Of course, the problem for Liff is that the “absolute weapon” he utilizes as a foil for cyber weapons/war is not equivalent in any sense, as the “absolute weapon,” according to Brodie, is the nuclear weapon and so has a different and unique bargaining logic unto itself (Schelling 1977). Conventional weapons follow a different logic (George and Smoke 1974).

SOTU: Cyber What?

In last night’s State of the Union Address, President Obama briefly reiterated the point that Congress has an obligation to pass some sort of cybersecurity legislation to protect “our networks,” our intellectual property and “our kids.” The proposal appears to reiterate earlier calls for companies to share more information with the government, in real time, about the hacks they suffer. Yet there is something a bit odd about President Obama’s cybersecurity call to arms: the Sony hack.

Through the public attention given over to the Sony hack, from the embarrassing emails about movie stars, to the almost immediate claims from the Federal Bureau of Investigation (FBI) that the attack came from North Korea, to the handwringing over what kind of “proportional” response to launch against the Kim regime, we have watched the cybersecurity soap opera unfold. In what appears to be the finale, we now have reports that the National Security Agency (NSA) watched the attack unfold, and that it was really the NSA’s evidence, not the FBI’s, that supported President Obama’s certainty that North Korea, and not some disgruntled Sony employee, was behind the attack. Where does this leave us with the SOTU?

First, if we believe that the NSA watched the Sony attack unfold—and did not warn Sony—then no amount of information sharing from Sony would have mattered.  Sony was de facto sharing information with the government whether it permitted this or not. This raises concerns about the extent to which monitoring foreign attacks violates the privacy rights of individuals and corporations.  Was the NSA watching traffic, or was it inside Sony’s networks too?

Second, the NSA did not stop the attack from happening. Rather, it and the Obama administration let the political drama unfold, and took the opportunity to issue a “proportionate” response through targeted sanctions against some of the ruling North Korean elite. The sanctions are merely additions to already sanctioned agencies and individuals, and so, functionally, they are little more than show.  The only sense that I can make of this is that the administration desired to signal publicly to the Kim regime and all other potential cyber attackers that the US will respond to attacks in some manner. This supports Erik Gartzke’s argument that states do not require 100% certainty about who launched an attack in order to retaliate. If states punish the “right” actor, then all the better; if they do not, they still send a deterrent signal to those watching. If this is so, however, it is immediately apparent that Sony was sacrificed to the cyber-foreign-policy gods, and that a different cost-benefit calculation was going on in the White House.

Finally, let’s get back to the Sony hack and the SOTU address. If the US was treating the Sony hack as an opportunity for deterrence, then it allowed Sony to suffer a series of attacks and did nothing to protect the company. If this is the case, then the notion that we need more information sharing with the government may be false.  What the government really wants is more permission, more consent, from the companies it is already watching. Protecting the citizens and corporations of the US requires a delicate balance between privacy and security. However, attempting to corrupt the means of maintaining security, such as by outlawing encryption, only makes citizens and corporations more unsafe and insecure. If the US government really wants to protect the “kids” from cyber criminals, then it should equip those kids with the strongest encryption there is, and teach them good cyber practices.

Cyber Letters of Marque and Reprisal: "Hacking Back"

In the thirteenth century, before the rise of the “modern” state, private enforcement mechanisms reigned supreme. Because monarchs of the time had difficulty enforcing laws within their jurisdictions, the practice of private individuals enforcing their own rights was widespread; for the sovereign to “reign supreme” while his subjects simultaneously acted as judge, jury and executioner, the practice of issuing “letters of marque and reprisal” arose. Merchants traveling from town to town or on the high seas often became the victims of pirates, brigands and thieves. Yet these merchants had no means of redress, especially when they were outside the jurisdiction of their states. Thus the victim of a robbery often sought to take back some measure of what was lost, usually in like property or in proportionate value.

The sovereign saw this practice of private enforcement as a threat to his sovereign powers, and so regulated the practice through the letters of marque. A subject would appeal to his sovereign, giving a description of what had transpired and asking permission to go on a counterattack against the offending party. The trouble, however, was that the offending party was often nowhere to be found. Thus the reprisals meant for an “offending” party usually ended up being carried out against the population or community from which the brigand originated. The effect of this practice, interestingly, was to foster greater communal bonds and ties and to cement the rise of the modern state.

One might ask at this point: what do letters of marque and reprisal have to do with cybersecurity? A lot, I think. Recently, the Washington Post reported that there is increasing interest in condoning “hacking back” against cyber attackers. Hacking back, or “active defense,” is basically attempting to trace the origins of an attack, and then gaining access to that network or system. With all of the growing concern about the massive amounts of data stolen from the likes of Microsoft, Target, Home Depot, JPMorgan Chase and nameless others, the ability to “hack back” and potentially do malicious harm to those responsible for data theft appears attractive.  Indeed, Patrick Lin argues we ought to consider a cyber version of “stand your ground,” where an individual is authorized to defend her network, data or computer. Lin also thinks that such a law may reduce the likelihood of cyberwar, because one would not need to engage or even consult with the state, thereby implicating it in “war crimes.” As Lin states, “a key virtue of ‘Stand Your Cyberground’ is that it avoids the unsolved and paralyzing question of what a state’s response can be, legally and ethically, against foreign-based attacks.”

Yet this seems to be the opposite of the approach to take, especially given the nature of private enforcement, state sovereignty and responsibility. States may be interested in private companies defending their own networks, but one of the primary purposes of a state is to provide for public—not private—law enforcement.  John Locke famously observed in his Second Treatise that the question of who shall judge becomes an “inconvenience” in the state of nature, giving rise to increased uses of force, then war, and ultimately requiring the institution of public civil authority to judge disputes and enforce the law. Cyber “stand your ground” or private hack backs place us squarely back in Locke’s inconvenient state.

Moreover, it runs contrary to the notion of state sovereignty. While many might claim that the Internet and the cyber domain show the weakness of sovereignty, they do not do away with it. Indeed, if we are to learn anything from the history of private enforcement and state jurisdiction, sovereignty requires that the state sanction such behavior. The state would have to issue something tantamount to a letter of marque and reprisal. It would have to permit a private individual or company to seek recompense for its damage or lost data. Yet this is, of course, increasingly difficult for at least two reasons. The first is attribution. I will not belabor the point about the difficulty of attribution, which Lin seems to dismiss by stating that “the identities of even true pirates and robbers–or even enemy snipers in wartime–aren’t usually determined before the counterattack; so insisting on attribution before use of force appears to be an impossible standard.” True attribution for cyber attacks is a lengthy and time-consuming process, often requiring human agents on the ground, and it is not merely a matter of tracing an IP address to a botnet.  True identities are hard to come by, and equating a large cyber attack to a sniper is unhelpful. We may not need to know the social security number of a sniper, but we are clear that the person with the gun in the bell tower is the one shooting at us, and this permits us to use force in defense.  With a botnet or a spoofed IP address, we are uncertain where the shots are really coming from. Indeed, it makes more sense to think of it as a string of hit men, each hiring a subcontractor, while we try to work out against whom we have a right of self-defense: the person doing the hiring, the hit men, or both?
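The hit-man analogy can be made concrete with a toy model. This sketch is purely illustrative (the function, the host names, and the chain structure are all my own invention, not any real tooling): it shows why reading the apparent source of an attack identifies only the last relay, never the originator.

```python
# Toy illustration of the attribution problem: an attack routed through
# a chain of compromised intermediaries. The victim's logs record only
# the final hop, so naive "attribution" blames the last relay.

def observed_source(relay_chain):
    """What the victim's logs show: the address of the final hop only."""
    return relay_chain[-1]

# originator -> hired botnet -> compromised third-party server -> victim
attack_path = ["originator", "hired-botnet", "compromised-university-server"]

blamed = observed_source(attack_path)
print(blamed)                     # the (possibly innocent) last relay
assert blamed != attack_path[0]   # the actual originator is never observed
```

A "hack back" aimed at `blamed` here strikes the compromised intermediary, which is exactly the reprisal-against-the-wrong-community problem the letters of marque produced.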

Second, even if we could engage a cyber letter of marque, we would need some metric for establishing a proportionate cyber counterattack.  Yet what are identities, credit card numbers, or other types of “sensitive data” worth? What if they never get used? Is the harm then merely the intrusion? Proportionality in this case is not a cut-and-dried issue.

Finally, if we have learned anything from the history of letters of marque and reprisal, it is that they went out of favor. States realized that private enforcement, which turned to public reprisals during the 18th to early 20th centuries, merely encouraged more force in international affairs. The modern international legal system calls acts that are coercive, but not uses of force (i.e. acts that would violate Article 2(4) of the United Nations Charter), countermeasures. The international community and individual states no longer issue letters of marque and reprisal. Instead, when states have their rights violated (or an “internationally wrongful act” taken against them), they utilize arbitration or countermeasures to seek redress. For a state to take lawful countermeasures, however, it must determine which state is responsible for the wrongful act in question. Yet cyber attacks, if we are to rely on what professional cybersecurity experts tell us, are sophisticated in hiding their identities and origins. Moreover, even if one discovers the origin of an attack, this may be insufficient to ground a state’s responsibility for the act. There is always the deniability that the state issued a command or hired a “cyber criminal gang.” Thus countermeasures against a state in this framework may be illegal.

What all this means is that if we do not want to ignore current international law, or the teachings of history, we cannot condone private companies “hacking back.” The only way to condone it would be for the state to legalize it, and if that were the case, it would be just like the state issuing letters of marque and reprisal. Yet legalizing such a practice may open those states up to countermeasures by other states. Given that most Internet traffic passes through the United States (US), many “attributable” attacks will look like they are coming from the US.  This in turn means that many states would then have reason to cyber attack the US, thereby increasing, not decreasing, the likelihood of cyberwar.  Any proposal to condone retaliatory private enforcement in cyberspace should, therefore, be met with caution.

Monstermind or the Doomsday Machine? Autonomous Cyberwarfare

Today in Wired magazine, James Bamford published a seven-page story and interview with Edward Snowden. The interview is another unique look into the life and motivations of one of America’s most (in)famous whistleblowers; it is also another step in revealing the depth and technological capacity of the National Security Agency (NSA) to wage cyberwar. What is most disturbing about today’s revelations is not merely what they entail from a privacy perspective, which is certainly important, but what they entail from an international legal and moral perspective.  Snowden tells us that the NSA is utilizing a program called “Monstermind.” Monstermind automatically hunts “for the beginnings of a foreign cyberattack. [… And then] would automatically block it from entering the country – a “kill” in cyber terminology.” While this seems particularly useful, and morally and legally unproblematic, as it is a defensive asset, Monstermind adds another, not so unproblematic, capability: autonomously “firing back” at the attacker.
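Going only on Snowden’s public description, the logic at issue might be sketched as follows. Every name and structure here is my own hypothetical reconstruction for illustration, not the NSA’s actual design:

```python
# Hypothetical sketch of an autonomous "detect, kill, strike back" loop,
# reconstructed solely from Snowden's description in Wired; all names
# here are invented for illustration.

def looks_malicious(packet):
    # Stand-in detector: a real system would inspect traffic signatures.
    return packet.get("malicious", False)

def monstermind_step(packet, actions):
    if looks_malicious(packet):
        actions.append(("kill", packet["src"]))         # block the attack
        actions.append(("strike_back", packet["src"]))  # autonomous response
        # Note: "src" is attacker-controlled. If it is spoofed, the
        # counterattack lands on an innocent third party, with no human
        # review and no proportionality judgment in between.
    return actions

actions = monstermind_step({"src": "203.0.113.7", "malicious": True}, [])
```

The point of the sketch is structural: the defensive “kill” and the offensive “strike back” are fused into a single automated step, which is exactly where the legal and moral problems discussed below arise.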

Snowden cites two problems with this new tactic. First, he claims that it would require access to “all [Internet] traffic flows” coming into and out of the US. This means in turn that the NSA is “violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.” Second, he thinks it could accidentally start a war. More than this, it could accidentally start a war with an innocent third party, because an attacking party could spoof the origin of the attack to make it look like another country is responsible. In cyber jargon, this is the “attribution problem”: one cannot attribute an attack to a particular party with certainty.

I, however, would like to raise another set of concerns in addition to Snowden’s: that the US is knowingly violating international humanitarian law (IHL) and acting against just war principles. First, through automated or autonomous responses, the US cannot by definition consider or uphold Article 52 of Additional Protocol I of the Geneva Conventions. It will violate Article 52 on at least two grounds. To begin with, it will violate Article 52(2), which requires states to limit their attacks to military objectives. These include “those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.” While one might object that the US has not ratified Additional Protocol I, it is still widely held to be a customary rule. Even if one holds that this is not enough, we can still claim that autonomous cyber attacks violate US targeting doctrine (and thus Article 52(2)), because this doctrine requires that any military objective be created by a military commander and vetted by a Judge Advocate General, ensuring that targeting is compliant with IHL. That a computer system strikes “back” without direction from a human being undermines the entire targeting process. Given that the defensive capacity to “kill” the attack is present, there seems no good reason to counterstrike without human oversight. In addition, striking back at an ostensibly “guilty” network will more than likely have significant effects on civilian networks, property and functionality. This would violate the principle of distinction, laid down in Article 52(1).

If one still wanted to claim that the NSA is not a military unit, and that any “strike back” cyber attack is not one taken in the course of hostilities (and thereby not governed by IHL), then we would still require an entire theory (and body of law) of what constitutes a legitimate use of force in international law that does not violate the United Nations Charter, particularly Article 2(4), which prohibits states from using or threatening to use force. One might object that a cyber attack that does not result in property damage or the loss of life is not subject to this prohibition. However, the view that such an attack does not rise to the level of an armed attack in international law (see, for instance, the Tallinn Manual) does not mean that it is not a use of force, and thus still prohibited. Furthermore, defensive uses of force are permissible in international law only in response to an armed attack (Article 51).

Second, autonomous cyber attacks cannot satisfy the just war principles of proportionality. The first proportionality principle has to do with ad bellum considerations of whether or not it is permissible to go to war. Whether we ought to view the “strike” as not engaging in war, or as a different kind of war, is another question for another day. Today, all we need to consider is that a computer program automatically responds in some manner (which we do not know) to an attack (presumably preemptively). That response may trigger an additional response from the initial attacker – either automatically or not. (This is Snowden’s fear of accidental war.) Jus ad bellum proportionality requires that all the harms be weighed against the benefits of engaging in hostilities. Yet this program vitiates the very difficult considerations required. In fact, it removes the capacity for such deliberation.

The second proportionality principle that Monstermind violates is the in bello version. This version requires that one use the least amount of force necessary to achieve one’s goals. One wants to temper the violence used in the course of war, to minimize destruction, death and harm.  The issue with Monstermind is that prior to any identification of an attack, and any “kill” of an incoming attack, someone has to create and set into motion the second step of “striking back.” However, it is very difficult, even in times of kinetic war, to respond proportionately to an attack. Is x amount of force enough? Is it too much? How can one preprogram a “strike back” attack for a situation that may or may not fit the proportionality envisioned by an NSA programmer at any given time? Can a programmer put herself in a position to envision how she would act at a given time in response to a particular threat? (This is what Danks and Danks (2013) identify as the “future self-projection bias.”) Moreover, if this is a “one-size-fits-all” model of striking back, then it cannot by definition satisfy in bello proportionality, because each situation will require a different type of response to ensure that one is using the minimal amount of force possible.

What all of this tells us is that the NSA is engaging in cyberwar: autonomously, automatically, and without our or our adversaries’ knowledge. In essence it has created not Monstermind but the Doomsday Machine. It has created a machine that possesses an “automated and irrevocable decision making process which rules out human meddling” and thus “is terrifying, simple to understand, and completely credible and convincing” now that we know about it.

A Tale of Three Cyber Security Articles

Cyber security has been on the general security agenda for some time now, but it is only recently that political scientists have really engaged the topic in a serious manner befitting the theoretical and empirical advances in the field.  In general, we have ceded this ground to those who either have a vested interest in the question (the cyber security industry) or seek to inflate the threat based on imagined fears.  This post will review some recent work in the field and evaluate the state of knowledge and future directions.

© 2017 Duck of Minerva
