When the Soviets launched Sputnik in 1957, the US was taken off guard. Seriously off guard. While Eisenhower didn’t think the pointy satellite was a major strategic threat, the public perception was that it was. The Soviets could launch rockets into space, and if they could do that, they could easily launch nuclear missiles at the US. So, aside from a damaged US ego about losing the “space race,” the strategic landscape shifted quickly and the “missile gap” fear was born.
The shock of this “strategic surprise” and the public backlash that followed pushed the US into a variety of science and technology ventures to ensure that it would never face such surprise again. One new agency, the Advanced Research Projects Agency (ARPA), was tasked with generating strategic surprise – and guarding against it. ARPA was renamed DARPA (the Defense Advanced Research Projects Agency) in 1972, but its mission did not change.
DARPA has been, and still is, the main source of major technological advancement for US defense, and we would do well to remember its primary mission: to prevent strategic surprise. Why, one might ask, is this important to students of international affairs? Because technology has always been one of the major variables (sometimes ignored) that affects relations between international players. Who has what, what their capabilities are, whether they can translate those capacities into power, whether they can reduce uncertainty and the “fog and friction” of war, whether they can predict future events, whether they can understand their adversaries – and on and on the questions go. But at base, we utilize science and technology to pursue our national interests and answer these questions.
In my last post I drew attention to the DoD’s new “Third Offset Strategy.” This strategy, I explained, is based on the assumption that scientific achievement and the creation of new weapons and systems will allow the US to maintain superiority and never (again) fall victim to strategic surprise. Like the first and second offsets, the third seeks to leverage advances in physics, computer science, robotics, artificial intelligence, and electrical and mechanical engineering to “kick the crap” out of any potential adversary.
Yet, aside from noting these requirements, what, exactly, would the US need to do to “offset” the threats from Russia, China, various actors in the Middle East, terrorists (at home and abroad), and any unforeseen “unknown unknowns”? I think I have a general idea, and if I am even partially correct, we need to have a public discussion about this now.
As Deputy Secretary Work notes, the new offset strategy shortens the time horizon for deploying technologies to five years – that is, by 2021. The DoD will still “look” for long-term projects and “plant seeds” for longer horizons, but it aims to change the face of technology and superiority by the time this year’s entering freshmen are on the job market.
This increase in the tempo of R&D – and of deployment – means that we need to talk about the promise and peril of these technologies, and about what the blowback may be, right now.
So what might the end state of this offset look like? For one, I don’t think there is an end state; even Work notes that his view is “never-ending.” To maintain superiority, then, the US requires the ability to know everything, be everywhere, and constantly push the boundaries. Strategic studies and military affairs scholars will recognize this as “information dominance” and increased “situational awareness,” but the difference is that these concepts will no longer be limited to a particular time and battlespace.
Rather, the rise of “gray wars,” international policing, and international counterterrorism requires a God’s-eye view of everyone at all times. To achieve this, the US would require vast networks of sensors. Through advanced computing and artificial intelligence (AI), the DoD – or anyone else – could monitor, track, and target individuals at any time, in any place. Wearables (those handy little devices that tell you to stand up and walk around a bit more), “smart homes” that let you change your home’s temperature from your phone, or even “smarter homes” that look after your aging parents by recognizing when they act outside their routine will provide these sensor inputs. Beyond the GPS data on your phone or the CCTV networks in cities, the data we create and shed will weave together all the details needed.
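To make the fusion step concrete, here is a minimal sketch, in Python, of how heterogeneous sensor feeds might be stitched into a per-person timeline and screened for departures from routine. Everything here – the event sources, field names, and the “usual locations” rule – is invented for illustration, not drawn from any actual DoD system.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical fused events: (person_id, timestamp, source, location).
# The sources stand in for the sensor types described above.
events = [
    ("subject_17", datetime(2016, 11, 1, 7, 2), "wearable", "home"),
    ("subject_17", datetime(2016, 11, 1, 8, 45), "phone_gps", "office"),
    ("subject_17", datetime(2016, 11, 1, 23, 30), "cctv", "warehouse_district"),
]

def build_timelines(events):
    """Fuse heterogeneous sensor feeds into one per-person timeline."""
    timelines = defaultdict(list)
    for person, ts, source, location in events:
        timelines[person].append((ts, source, location))
    for timeline in timelines.values():
        timeline.sort()  # chronological order, regardless of sensor origin
    return timelines

def flag_deviations(timeline, usual_locations=frozenset({"home", "office"})):
    """Flag sightings outside a person's (assumed) routine locations."""
    return [(ts, src, loc) for ts, src, loc in timeline
            if loc not in usual_locations]

timelines = build_timelines(events)
for person, timeline in timelines.items():
    for ts, src, loc in flag_deviations(timeline):
        print(f"{person}: unusual location {loc} at {ts} (via {src})")
```

The point is not the toy logic but the architecture: once disparate feeds share a common schema, monitoring scales with compute rather than with human analysts.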
One might object, however, that the mere behavior of an individual cannot signal intent. And it is really intent that governments and defense agencies need to know in order to counter any potential surprise attack. Intent, however, may be readily deducible through the monitoring of vast amounts of online communication, or picked up through the microphones of your child’s favorite new toy, or through the cool new TV that has voice recognition. Depending upon who “owns” that data, one may not have a right to privacy in one’s own home; and if one is outside the US, one has no right to privacy against US intelligence operations at all.
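For a sense of how crude such inference can be, consider a deliberately simple sketch of lexicon-based intent scoring. The lexicon, weights, and threshold below are all hypothetical; real systems would use trained language models rather than keyword lists.

```python
# Invented lexicon: word -> weight suggesting hostile intent.
INTENT_LEXICON = {"attack": 3.0, "target": 2.0, "meet": 0.5, "tonight": 0.5}
THRESHOLD = 3.0  # hypothetical flagging cutoff

def intent_score(message: str) -> float:
    """Sum lexicon weights for words appearing in the message."""
    return sum(INTENT_LEXICON.get(w, 0.0) for w in message.lower().split())

messages = [
    "let's meet tonight",
    "we attack the target at dawn",
]
for m in messages:
    score = intent_score(m)
    status = "FLAG" if score >= THRESHOLD else "ok"
    print(f"[{status}] {score:4.1f}  {m!r}")
```

Even this toy shows why ambient microphones and chat logs are attractive data sources – and why false positives are unavoidable once context is stripped away.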
But let’s say, for the sake of argument, that we go beyond the “information dominance” aspect to “situational awareness.” What would that require to offset the rise of all these lions and tigers and bears? Ubiquitous unmanned systems in the air, land, sea, space, and cyber domains, connected to one another in heterogeneous packs that can act together for collaborative operations and then reconfigure later for different missions. These systems will probably need to transmit significant amounts of data to various processing locations, where the data will be analyzed, fed into war-gaming simulations, and funneled into a few courses of action for a commander somewhere to review. Maybe the AIs will be robust enough to suggest operational concepts to go along with these courses of action, eliminating the need for operational commanders as well as tactical ones, since the teams of unmanned systems will be able to work together with little or no supervision.
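What might “reconfigurable heterogeneous packs” mean in software? A minimal sketch, assuming a hypothetical asset catalog with invented domain and capability tags: assets are drawn from a common pool to cover one mission’s requirements, then released and regrouped for the next.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    domain: str          # air, sea, land, space, or cyber
    capabilities: set    # invented capability tags

# Hypothetical fleet; names and capabilities are illustrative only.
FLEET = [
    Asset("uav_1", "air", {"sensing", "strike"}),
    Asset("uuv_1", "sea", {"sensing"}),
    Asset("ugv_1", "land", {"logistics"}),
    Asset("cyber_1", "cyber", {"jamming", "sensing"}),
]

def form_pack(fleet, required_capabilities):
    """Greedily assemble a heterogeneous pack covering the mission's
    capability requirements; assets return to the pool afterward."""
    pack, still_needed = [], set(required_capabilities)
    for asset in fleet:
        if asset.capabilities & still_needed:
            pack.append(asset)
            still_needed -= asset.capabilities
        if not still_needed:
            break
    return pack if not still_needed else None  # None: mission infeasible

recon = form_pack(FLEET, {"sensing", "jamming"})
strike = form_pack(FLEET, {"sensing", "strike"})  # same pool, new pack
for mission, pack in [("recon", recon), ("strike", strike)]:
    print(mission, [a.name for a in pack])
```

The greedy matching here is a stand-in for whatever planning algorithm such systems would actually run; the design point is that team composition becomes a computation, repeated per mission, rather than a fixed order of battle.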
No, no, you say. This is too much science fiction; militaries would not want to minimize the role of human commanders or warfighters like this. And yet they do. No, you say, the military would never hand over authority or delegate tasks like this to computers and robotic platforms. Yet we know that the speed of operational tempo will require it, and that the defense of such vast networks will require AIs to patrol them and respond at machine speed, far faster than any human could.
But, you say, this can never happen in five years. And here you may be right. Work’s estimate that we can advance our capacities to this extent in five years is optimistic, but not inconceivable. All of the technologies I have listed here already exist. The challenge is not inventing them, but making them secure, reliable, verifiable, and validated.
What, then, do we have to look forward to from an IR standpoint? Our theories of bargaining, war duration, escalation, diplomacy, and even economics (in short, DIME) are going to face the same types of challenges that modern militaries face with hierarchical command and control. In other words, we may need to rethink how we fight and the assumptions we bring to the table about fighting, or about pursuing national interests through power. If you think I might be wrong, consider that cyber deterrence does not (yet) exist, and consider how hard it is to do theoretical and empirical research on systems you know nothing about, or that race ahead of you before you’ve managed to fit them into an ill-fitting theoretical construct. We all love Schelling, but now the arms have their own preferences and influence.