I have yet to weigh in on the recent hack on the Office of Personnel Management (OPM). This is mostly for two reasons. The first is the obvious one for an academic: it is summer! The second is that, as with most cyber events, this one continues to unfold. When we learned of the OPM hack earlier this month, the initial figure was 4 million records. That is, 4 million present and former government employees’ personal records were compromised. This week, we’ve learned that it is more like 18 million. While some argue that this hack is not something to be worried about, others are less sanguine. The truth of the matter is, we really don’t know. Coming out on one side or the other is a bit premature. The hack could be state-sponsored, with the data squirreled away in a foreign intelligence agency. Or it could be state-sponsored, but with the data sold off to high bidders on the darknet. Right now, it is too early to tell.
What I would like to discuss, however, is what the OPM hack—and many recent others like the Anthem hack—shows about how we think about cybersecurity and cyber “deterrence.” Deterrence, as any IR scholar knows, is about getting one’s adversary to refrain from some action or behavior. It’s about keeping the status quo. When it comes to cyber-deterrence, though, we are left with serious questions about this simple concept. Foremost among them is: deterrence from what? All hacking? Data theft? Infrastructure damage? Critical infrastructure damage? What is the status quo? The new cybersecurity strategy released by the DoD in April is of little help. It merely states that the DoD wants to deter states and non-state actors from conducting “cyberattacks against U.S. interests” (10). Yet this is quite vague. What counts as a U.S. interest?
These questions are not mere academic navel-gazing; they are crucial to policy formation. Deterrence is necessarily relational; that is, one must know whom one is deterring, what their interests are, and what they will deem an “unacceptable cost” to pay should they undertake the undesired course of action. Moreover, the adversary must have a clear understanding of what will or will not trigger that unacceptable-cost response. There can be no ambiguity about whether a state will or will not respond to a particular attack. If ambiguity is helpful at all, it is only in the severity of the costs imposed on the attacker. Even then, there must be some bar or threshold that the attacker thinks twice about crossing.
The signaling that the U.S. continues to undertake with regard to cybersecurity (and thus cyber-deterrence) is weak. In fact, the only clear message the U.S. sends is that if a potential adversary attacks infrastructure and does serious physical or economic damage, then the U.S. will respond through any means necessary, which might include sending “missiles down smokestacks.” Yet this does not actually do the U.S. any favors in the deterrence game. It sets the threshold for serious and costly responses quite high, and anything below that threshold is a gamble from the adversary’s perspective. They may get quite a good payoff from a try-it-and-see approach. In the Anthem hack, for example, an estimated 78 million people’s information was stolen—how much is that worth?
There is a second problem with much of the discussion on cyber-deterrence, too, and that is the focus on “infrastructure.” What we have learned from hacks such as OPM and Anthem is that many of them are not about damaging infrastructure but about stealing data. The difference is subtle but important. I can come to your house and rob you, but leave your house standing. It might cost you something to change the locks or install a security system, but I have not burned your house to the ground. The focus on infrastructure obscures what needs to be defended equally: data. While protecting data surely requires the best hardware and software (as well as good practices and cyber hygiene), it is a different target and requires a different mindset.
This brings me to my third point: the U.S. government will continue to suffer such attacks and will be hard pressed to be “resilient” to them because of the institutional and structural problems associated with big bureaucracy. For instance, the procurement cycles for new hardware and software are notoriously long, and finding the money to purchase these items is increasingly difficult. OPM was already aware of its security vulnerabilities and had requested several years’ worth of funding, totaling $94 million, to upgrade its systems—but the time devoted to merely moving data was estimated at two years! The security threats change as rapidly as someone can write code. The long procurement cycle in the U.S. government means that it will inevitably face future hacks with systems that are woefully out of date.
Moreover, by ostensibly decoupling cyber capabilities for military and intelligence from all other government agencies and functions, the U.S. privileges certain agencies (NSA/Cybercom/CIA/FBI) over others with regard to receiving adequate resources for cybersecurity. If the government does not want to treat all agencies equally, and the less favored ones suffer more attacks from “foreign adversaries,” then one might argue that the NSA/Cybercom has the remit to protect those other agencies, and thus that we do not need to furnish everyone with the best training, hardware, and software. But if this is so, we may quickly find ourselves in a perilous situation: the NSA/Cybercom may claim that it requires access to everything at all times to make sure that foreign adversaries are not hacking U.S. “interests.” Surely this goes too far for those worried about privacy protection.
Ultimately, the way we have framed the “cyber” problem continues to be part of the problem. Deterrence is the wrong framework for cyberspace. Deterrence means that you know who your adversary is. You know what capabilities he has, so that you can defend against them and impose costs on him even after he attacks you. But to impose those costs, you must know what he values. You must know what will be too costly for him, and you must publicly and credibly threaten those interests.
Cyberspace is nothing like this, however. Often, when we can identify an attacker, it is at best a guess based on shadowy evidence. Moreover, actors appear to be limiting their “damage” in such a way that overt and escalatory responses are not justified. Instead, we need to start rethinking how we approach this domain. We cannot keep looking for old bottles and equally old wine.
I fully agree with the basic point that deterrence or balancing can’t really work as long as attacks are not attributable. This is something we discussed a while back at the IR Blog: https://irblog.eu/no-balance-of-cyber-threats/
Your distinction between infrastructure and data is very helpful! It seems that data being “copied/stolen” is less alarming and more diffuse (from the defenders’ point of view) than infrastructure being “damaged.” One option might be to try to change the discourse. We should say more clearly that each breach of security is equally damaging, because not knowing who has your private data is as bad as any other damage could be. (In your burglary example, victims are often traumatized by the break-in, whereas a bit of cash and electronics can be easily replaced.)
Another option, of course, would be to try to decrease the number of juicy targets there are in the first place, for instance by minimizing data collection and retention. (To link this to my favorite IT security topics: Who will be “liable” when a third party copies the NSA/CIA database on EU officials? Or metadata on all citizens’ web and phone usage?)