In the fall of 2014, then-Defense Secretary Chuck Hagel announced his plan to maintain US superiority over rising powers, namely Russia and China. His claim was that the US cannot afford to lose its technological edge – and thus its superiority – against a modernizing Russia and a rapidly militarizing China. To secure this edge, he called for a “Third Offset Strategy.”
The previous two offset strategies were premised on Cold War rivalry and the belief that one side could outspend the other through arms racing. The first offset sought to counter the Soviets’ feared advantage in manpower and technological prowess; the US sought to deter the Soviets from engaging in a nuclear war. Despite building up vast arsenals of nuclear weapons, both tactical and strategic, as well as creating and detonating thermonuclear weapons and detonating nuclear devices in space, the two sides remarkably never engaged in direct hostilities. I do not know whether the offset strategy had anything to do with this, but one could certainly argue that it increased the likelihood of a nuclear confrontation: we know that during one of the tensest foreign policy situations in history – the Cuban Missile Crisis – four (yes, four) nuclear weapons were detonated. What is more, the US and the Soviets continually engaged in proxy wars.
The second offset sought ways to fight a numerically superior force; the results were advances in sensor technologies, command and control, stealth, precision-guided munitions, lasers, and unmanned vehicles. In attempting both to create defensive capacities at home and to generate new technologies for the war in Vietnam, the US amassed the ability to wage network-centric warfare. While that capability was not fully realized until the first Gulf War, the legwork for much of the technology development was carried out during the 1970s.
After the first Gulf War, US technological dominance did not go unnoticed by other powers. To counter the offset, these powers began rethinking ways to deny the US maneuverability and access. It is this fear – of anti-access/area-denial capabilities – that has given rise to the third offset. In Hagel’s estimation, the US requires a:
“New Long-Range Research and Development Planning Program that will help identify, develop, and field breakthroughs in the most cutting-edge technologies and systems – especially from the fields of robotics, autonomous systems, miniaturization, big data, and advanced manufacturing, including 3D printing.”
To lead the charge, Hagel appointed Deputy Secretary of Defense Bob Work. Work, a retired Marine Corps colonel, is a tall and imposing man. He brooks no nonsense and is forthright in his estimation that Russia and China pose the biggest threats to US security. Speaking this past November at the Halifax International Security Forum, Work stated that “great power competition has returned.” He argued that the US must utilize a “strong but balanced” approach to these near-peer competitors.
Aside from wondering how an “offset strategy,” premised on the notion that one side is technologically superior, can work with a “balanced” approach, Work’s plan to pursue the third offset through autonomous technologies, artificial intelligence, robotics, and big data was a bit of a mystery until earlier this month, when he gave a speech at the Center for a New American Security (where he was briefly CEO). There he stated that the Defense Department is seeking $12 to $15 billion in its 2017 budget to advance these technologies and to “kick the crap out of people who grew up in the iWorld under an authoritarian reign.”
$15 billion. To get a handle on what this figure means, we should put it into perspective. The overall defense budget is somewhere on the order of $550 billion (excluding contingency funds and the black budget), so the third offset does not seem that large. However, compare it to NASA’s entire budget, which is somewhere in the $17 billion ballpark. Or think of it relative to other states’ military budgets. Adjusted for inflation, Canada spends roughly $16 billion on its entire military, and that was under the conservative government of Stephen Harper. The US, in other words, is getting ready to spend as much on autonomous weapons, artificial intelligence, robotics, and other high-tech weapons as Canada spends on all of its weapons, personnel, and logistics combined.
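To make the comparison concrete, here is a back-of-the-envelope calculation using the rough figures cited above (all of them ballpark numbers, not official budget lines):

```python
# Back-of-the-envelope comparison using the rough figures cited in the text.
# All values are approximate and in billions of US dollars.
offset3 = 15.0      # high end of the reported third-offset request
us_defense = 550.0  # overall US defense budget (excluding contingency/black budget)
nasa = 17.0         # NASA's entire annual budget
canada = 16.0       # Canada's entire military budget

print(f"Share of the US defense budget: {offset3 / us_defense:.1%}")  # ~2.7%
print(f"Relative to NASA's budget:      {offset3 / nasa:.0%}")        # ~88%
print(f"Relative to Canada's military:  {offset3 / canada:.0%}")      # ~94%
```

Small next to the Pentagon’s total, in other words, but on par with an entire allied military or a national space agency.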
It is thus clear that such a dramatic increase in research and development for the third offset will change the character of future war. But what of the strategy’s assumptions? To begin, one might wonder whether preparing for near-peer conflict is the best strategy going forward, given that non-international armed conflict and hybrid war are the new normal. Preparing for a war that is unlikely to play out on a formalized decision tree is unhelpful at best, and how these near-peers might test this technology is an open question (given that many of today’s conflicts are operationally inappropriate for such weapons).
Another worry is that relying on tactical approaches – on the assumption that technology will solve the problem – fails to address the need for strategic thinking. Technology is not “the” answer. Technology and technological development can only aid one in obtaining one’s predetermined goals. If one does not have a clear end point in mind, then one cannot know which technologies to develop. What is more, claiming that the US must “offset” a rising China and a modernizing Russia is not a strategy. It is a countermeasure.
Finally, the assumptions driving the DoD, and particularly Work, are still mired in a Cold War perspective. Work has mentioned an “AirLand Battle 2.0” without really explaining how this differs from AirLand Battle 1.0 (set out in the 1982 edition of US Army Field Manual 100-5). If AirLand Battle 2.0 is qualitatively different from its Cold War predecessor, I’d like to know how. For instance, Work describes how warfare will be “hybrid” and “asymmetric” and says that we need to learn how to fight and win in these environments, but then he falls back on a strategic mindset that is not strategic. AirLand Battle 1.0 was a doctrine about how to fight. It was an operational and tactical handbook, not a strategic perspective. As Romjue pointed out in 1984:
“FM 100-5 adds precision to earlier statements of the AirLand Battle concept. It is explicit about the intent of U.S. Army doctrine, and it conveys a vigorous offensive spirit. AirLand Battle doctrine ‘is based on securing or retaining the initiative and exercising it aggressively to defeat the enemy. . . . Army units will . . . attack the enemy in depth with fire and maneuver and synchronize all efforts to attain the objective.’ It also notes that ‘our operations must be rapid, unpredictable, violent, and disorienting to the enemy.’”
So where does this leave us in 2016 and beyond? First, it is beyond unhelpful to return to the well of “offset strategies” and Cold War doctrine. As Khong rightly noted over 20 years ago, analogizing current foreign policy situations is not very helpful. Likewise, it is a mistake to think that what worked in 1959 or in 1972 will work now.
Second, the types of technologies that the DoD is pursuing are qualitatively different from their predecessors. This is not to say that they are more or less dangerous. It is to say that what Work and his colleagues are after is the ability of nonhuman agents to make decisions on and in the battlespace. Autonomous technologies are not, as I’ve argued elsewhere, “smarter smart bombs.” In other words, they are not precision-guided munitions; they are “deciding munitions.” Where a human being decides on a particular target and launches munitions at that target, an autonomous weapon or an artificial intelligence would be the entity deciding. Which munitions it launches is beside the point.
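The distinction can be sketched schematically. In the minimal sketch below, every name is hypothetical and the logic is deliberately toy-like; the point is only where the targeting decision sits – with a human operator, or inside the machine’s own loop:

```python
# Hypothetical sketch: names and logic are illustrative only, not a
# description of any real system. The point is where the decision sits.

def precision_guided_strike(human_select_target, guide_munition):
    """A human decides the target; the munition merely guides itself to it."""
    target = human_select_target()   # the decision stays with a person
    guide_munition(target)           # the machine only executes

def autonomous_strike(machine_select_target, launch, sensor_feed):
    """The machine is the deciding entity; which munition it then launches
    is, as argued above, beside the point."""
    target = machine_select_target(sensor_feed)  # the machine decides
    launch(target)

# Toy usage: the structural difference is one line, but it moves the
# decision from the human to the machine.
precision_guided_strike(lambda: "target A", print)
autonomous_strike(lambda feed: feed[0], print, ["target A", "target B"])
```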
In the end, it appears that the US fear of losing its technological edge to near-peer competitors is a self-fulfilling prophecy. That is, the more it pursues this particular offset, the greater the chances that all sides will simultaneously benefit from these advances. Creating stronger AI is not like building thermonuclear warheads: the materials are easy to come by, the knowledge is readily available, and the commercial sector is leading the way. Moreover, if we are to abide by doctrinal insights from forty-plus years ago without questioning their applicability or appropriateness, then perhaps we should also embrace some of the other gems of the day, like building fallout shelters, smoking on airplanes, and shooting student protesters who question war.
Thanks for this. I’d been thinking along similar lines, though in a less informed way.
FWIW, I’ve been puzzled for quite a while at the push to develop weapons suitable for fighting the big powers. It seems understandable as a sort of back-up plan: however “small” our current enemies may be, we need to be able to win against the big ones should conditions change. But if we are not already up to that, given our level of defense spending, something seems terribly wrong.
It seems even more puzzling when both (a) big powers aren’t the ones we’ve been fighting lately or seem likely to fight soon and (b) our actual fights have gone abysmally, especially from a strategic perspective: we can win each battle but lose the conflict as a whole. Shouldn’t we be pushing for solutions to the significant problems we’re actually experiencing?
More broadly, I’ve been puzzled at the push to develop weapons at all when our failures have not been primarily technological. To put it very cynically, Col. Work’s approach seems better suited to enriching defense contractors than to enhancing national security.
I agree with jeremiah. This looks like an “emperor has no clothes” situation. All those billions of dollars will buy are some nice mockups, but no usable systems.
This looks like an interesting post. I read through it only quickly, but I plan to come back and read it more carefully. For now, though, a small note about your reference to Khong’s Analogies at War. “Analogizing current foreign policy situations is not very helpful” (your phrase) seems to mean, in the context of the post, “it’s not very helpful to look to the past for guidance.” That’s not precisely the message of Khong’s book, as I recall. A more accurate summary probably would be: the use of historical analogies when making foreign-policy decisions has to be approached with a lot of care and caution, because it’s easy, for psychological and other reasons, to use analogies poorly, and it’s difficult to use them well.
Also, the use of analogies is only one way in which the past can be ‘used’ by policymakers. Other ‘uses’ of the past might be, in some cases, less dicey or difficult. Or perhaps not — I think this is all very dependent on context: what purported ‘lessons’ are being cited or mobilized for what particular purposes. But to suppose that it’s possible for policymakers to approach any issue completely fresh, as an entirely unique question, seems unwarranted — dare one say unrealistic? — as a matter of psychology, if nothing else.