The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle more than two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or a ship’s pings.
Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept. Unfortunately, there is scant discussion of what the Kill Web actually is or entails. We have glimpses of the technologies that will comprise it, such as integrated sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.
As if straight out of Sun Tzu, Navy Captain Tom Druggan explains: “We need more data providers. The target space, if we ever go to combat or war, will be rich. We need sensors that can track them all, but all sensors are not created equal, so we need sensors that can provide fire control quality data.” In layman’s terms: data that enables a rapid lethal response.
But is this vision actually going to work? Or is it wishful thinking in a bid to meet and defeat anti-access and area-denial strategies and check the rising powers of Russia and China? I worry that the Kill Web will actually yield the opposite of its intended function: it will frustrate warfighters by decreasing human situational awareness and removing the notion of “intelligence” from the battlespace. In short, because of technical and human limitations, we are going to encounter ever greater incentives to fully automate decision making and the use of force in war.
But let’s look at the first intuition, the data-information-intelligence chain. There is no guarantee that increasing the number or kinds of sensors in a battlespace will yield useful information. Data is just an amalgamation of discrete, observed facts or inputs; it lacks any meaning. Only when this data is given meaning, when it becomes something useful, can we call it information. Sensor input does not necessarily lead to useful information, as it could merely be noise. This lack of information leads to less understanding in the battlespace, which may in turn lead to misunderstandings or inappropriate decisions about when or where to fire.
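A toy sketch can make the distinction concrete. Everything in it is invented for illustration, the reading format and the noise floor alike, but it shows the basic point: raw returns are only data until something interprets them, and most of what a sensor collects may never rise to the level of information.

```python
# Toy illustration of the data-vs-information distinction: raw readings
# are meaningless until something interprets them against context. The
# reading format and noise floor here are invented for illustration.

NOISE_FLOOR_DB = -90.0  # hypothetical detection threshold

def to_information(raw_readings):
    """Turn raw signal returns (data) into labeled detections (information)."""
    detections = []
    for reading in raw_readings:
        if reading["power_db"] <= NOISE_FLOOR_DB:
            continue  # still data, but indistinguishable from noise
        detections.append({
            "position": reading["position"],
            "confidence": min(1.0, (reading["power_db"] - NOISE_FLOOR_DB) / 30.0),
        })
    return detections

readings = [
    {"position": (31.2, 121.5), "power_db": -95.0},  # below the floor: noise
    {"position": (31.3, 121.4), "power_db": -72.0},  # a plausible contact
]
print(to_information(readings))  # only the second reading becomes information
```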
Typically, when sensors receive data inputs, someone or something needs to make sense of them (turn them into information), but this process is time consuming. The simple act of moving sensor data over data-link communications is subject to lags and delays. Moreover, whatever information is generated needs to be pushed to a commander, and then she needs to decide on the requisite course of action and distribute those orders. (This is, in essence, the older notion of the “kill chain”: a linear model that starts with target identification, moves to force dispatch, then to the decision and order to attack, and ends with destruction of the target.) But for the Kill Web, there is uncertainty about how data, information processing, decision making, and lethal force will work together, and about whether the Kill Web is any different from its predecessor.
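To see why that linearity matters, consider a deliberately simple model of the chain. The stage names and latencies below are notional, not doctrinal figures; the point is structural: in a strictly sequential chain, every delay accumulates, and the slowest stage (usually the human one) sets the pace of the whole engagement.

```python
# A minimal model of the linear kill chain: each stage must finish before
# the next begins, so every delay accumulates end to end. Stage names and
# latencies are notional, not doctrinal figures.

KILL_CHAIN = [
    ("identify_target", 4.0),        # sensor detects and classifies
    ("transmit_to_commander", 2.5),  # data-link lag
    ("decide_and_order", 8.0),       # human commander deliberates
    ("dispatch_and_engage", 6.0),    # force moves and fires
]

def chain_latency(stages):
    """End-to-end delay (seconds) of a strictly sequential chain."""
    return sum(latency for _, latency in stages)

print(f"sensor-to-effect delay: {chain_latency(KILL_CHAIN):.1f} s")
# The chain is exactly as slow as the sum of its parts; slowing any one
# stage slows the entire engagement.
```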
Where it might be different is in the Web’s nonlinear, dynamic, complex, and distributed vision. Here, netted sensors form an invisible web alerting warfighters to possible threats or adversary movements. This vast array of sensors produces massive amounts of data, because each sensor node could be anything from a platform (a ship, plane, or ground vehicle) to an unattended sensor sitting on the ground or floating in the sea. For this Web to work, all warfighters need to see the same picture in real time. That is, they need actionable, useful information that gives them situational awareness (especially if the warfighter is nowhere near the battlespace).
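In software terms, the Web’s promise is a fused “common operational picture” assembled from every node. A minimal sketch, with invented node names and a deliberately naive merge, shows the idea, and hints at how much harder real fusion is when the same object arrives from different sensors with different identifiers and positions.

```python
# A toy "common operational picture": every node contributes tracks, and
# the fused picture must look identical to every warfighter who queries it.
# Node names and track formats are invented.

from itertools import chain

def fuse_picture(node_reports):
    """Merge per-node track lists into one shared picture, deduplicating
    naively by track id. (Real fusion, where the same object shows up with
    different ids and positions from different sensors, is far harder.)"""
    picture = {}
    for track in chain.from_iterable(node_reports.values()):
        picture[track["id"]] = track
    return picture

node_reports = {
    "destroyer":     [{"id": "T-1", "pos": (31.2, 121.5)}],
    "patrol_plane":  [{"id": "T-1", "pos": (31.2, 121.5)},
                      {"id": "T-2", "pos": (30.9, 122.0)}],
    "seabed_sensor": [{"id": "T-3", "pos": (31.0, 121.8)}],
}
print(sorted(fuse_picture(node_reports)))  # ['T-1', 'T-2', 'T-3'] for everyone
```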
The trouble, however, is that sharing this information is subject to communications limitations and time delays. Such delays would hamper a military’s ability to keep the pace of war fast, hindering its “operational tempo.” The faster the operational tempo, the harder it is for an adversary to make good decisions; whoever can maintain a faster pace with good information and accuracy wins. One way around the time-lag vulnerability would be to store or process data in the same location as the sensor.
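A back-of-the-envelope calculation shows why processing at the sensor is so tempting. The link rate and payload sizes below are hypothetical, but the asymmetry they illustrate is real: on a constrained data link, shipping raw sensor data dominates the timeline, while shipping a small processed track report does not.

```python
# Back-of-the-envelope numbers for the edge-processing incentive. The link
# rate and payload sizes are hypothetical, chosen only to show the asymmetry.

RAW_FRAME_BYTES = 50_000_000   # an unprocessed sensor frame
TRACK_REPORT_BYTES = 2_000     # a processed track: position, velocity, class
LINK_RATE_BPS = 1_000_000      # a contested, low-bandwidth data link
EDGE_PROCESSING_S = 0.5        # time to reduce the frame on the sensor itself

def transmit_time(payload_bytes, rate_bps):
    """Seconds to push a payload over the link, ignoring protocol overhead."""
    return payload_bytes * 8 / rate_bps

ship_raw = transmit_time(RAW_FRAME_BYTES, LINK_RATE_BPS)
ship_track = EDGE_PROCESSING_S + transmit_time(TRACK_REPORT_BYTES, LINK_RATE_BPS)

print(f"ship raw data back:    {ship_raw:7.1f} s")   # ~400 s on this link
print(f"process at the sensor: {ship_track:7.2f} s") # well under a second
```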
But “smart” sensors will probably not suffice to fully remedy this problem. Depending upon the algorithm running on the sensor, relevant data may not be pushed out to decision makers. Trickier still is when an adversary knows about such sensors and tries to trick them with countermeasures, such as spoofed data inputs. But AI and decoys aside, there remains the very difficult problem of “sensor fusion” even once all the sensors are working correctly (just ask the folks working on the F-35).
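Those first two failure modes fall out of the same design. In the sketch below, where the scores and threshold are stand-ins for whatever model a real smart sensor would run, anything the on-board model scores low never leaves the sensor, while a convincing spoof sails straight through to decision makers.

```python
# Sketch of both failure modes at once: an on-sensor filter decides what is
# "relevant," so a real threat the model scores low never leaves the sensor,
# while a convincing spoof is pushed to decision makers. The scores and
# threshold are stand-ins for a real on-board model.

REPORT_THRESHOLD = 0.8

def smart_sensor_report(contacts):
    """Split contacts into what gets pushed out and what is silently dropped."""
    reported = [c for c in contacts if c["model_score"] >= REPORT_THRESHOLD]
    dropped = [c for c in contacts if c["model_score"] < REPORT_THRESHOLD]
    return reported, dropped

contacts = [
    {"id": "A", "model_score": 0.95, "truth": "decoy"},        # spoof gets through
    {"id": "B", "model_score": 0.60, "truth": "real threat"},  # never reported
]
reported, dropped = smart_sensor_report(contacts)
print("pushed to decision makers:", [c["id"] for c in reported])  # ['A']
print("never leaves the sensor:  ", [c["id"] for c in dropped])   # ['B']
```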
Add to this a battlespace that is probably jammed, noisy, cluttered, and chaotic, as well as slow in transmission speeds, and we begin to see how the very decoupling of the sensor from the “shooter” becomes a growing vulnerability. This is the “sensor-to-shooter loop.” The shorter the distance (metaphorical or literal) between the sensor and the shooter, the faster the operational tempo. Yet if the future battlespace is nonlinear, dynamic, and complex, and sensor data flows through it at varying speeds from distributed sensors, there is actually an incentive to take the human out of the loop. Why?
First, we know that future warfighting will see increased reliance on, and proliferation of, unmanned systems. There will be fewer humans in the battlespace. With more sensors and fewer humans, we risk a big-data “tipping point” where humans simply cannot see the whole picture and are confined to the limited picture in front of them (if they have one at all). In these cases, there are incentives to use machine learning and artificial intelligence to process and collate information into “useful” packages for humans. Humans will then rely on advanced algorithms to tell them the correct targets. The human in this instance is not doing the target identification, but is merely a waypoint in the sensor-to-shooter loop.
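What “merely a waypoint” means is easiest to see structurally. In this sketch, where the scoring values and track format are invented, the machine does the identification and the ranking; the human never re-derives either, and sees only the machine’s top pick.

```python
# Structural sketch of the human as waypoint: the machine identifies and
# ranks; the human sees only the top pick and a score. The scoring values
# and track format are invented.

def rank_targets(tracks):
    """The machine does the target identification: sort by model score."""
    return sorted(tracks, key=lambda t: t["threat_score"], reverse=True)

def human_waypoint(ranked, approve=lambda top: True):
    """The 'human in the loop' never re-derives the identification; they
    receive the machine's top candidate and, in practice, tend to approve."""
    top = ranked[0]
    return top if approve(top) else None

tracks = [
    {"id": "T-17", "threat_score": 0.91},
    {"id": "T-04", "threat_score": 0.43},
]
target = human_waypoint(rank_targets(tracks))
print("engaging:", target["id"] if target else "nothing")  # engaging: T-17
```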
Second, if a military faces a contested or “denied” communications environment, then humans do not merely face a slow network, or a potentially even slower human commander’s response, but a lack of communications altogether. Here the incentive is to integrate the sensor and shooter and delegate the authorization to shoot to the platform itself. At that point, we would face autonomous weapons systems.
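That delegation can be as mundane as a timeout in a control loop. The sketch below is entirely notional, with the link interface, timeout, and threshold all invented for illustration, but it shows how quietly engagement authority devolves to the platform the moment the link goes silent.

```python
# Notional control loop: the platform waits for a human order, and when the
# link stays silent past a timeout, engagement authority falls back to an
# on-board model. Link interface, timeout, and threshold are all invented.

import time

COMMS_TIMEOUT_S = 1.0

def await_human_order(link):
    """Return True/False if an order arrives in time, or None if comms fail."""
    deadline = time.monotonic() + COMMS_TIMEOUT_S
    while time.monotonic() < deadline:
        order = link.poll()  # hypothetical data-link interface
        if order is not None:
            return order == "ENGAGE"
        time.sleep(0.05)
    return None  # link jammed or severed

def engagement_decision(target, link, onboard_model):
    order = await_human_order(link)
    if order is not None:
        return order  # a human stayed in the loop
    # Denied environment: the platform decides on its own. At this point
    # the system is, in effect, an autonomous weapon.
    return onboard_model(target) >= 0.9

class DeniedLink:
    def poll(self):
        return None  # jammed: no order will ever arrive

print(engagement_decision({"id": "T-17"}, DeniedLink(),
                          onboard_model=lambda t: 0.95))  # True: fires on its own
```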
Even if one thinks this is too far afield, since it denies the Kill Web’s integrated concept, one cannot deny the incentive structure that the Kill Web creates. When a military wants to “suck in all this information” and then engage more quickly than is presently possible, the weakest link tends to be waiting for a human.
Force planners may think that “tactical clouds” hosting all this sensor data (or processed information) on some platform at the “forward edge” of operations will keep humans engaged, but they will quickly realize that significant situational-awareness problems remain. A distributed battlespace, unmanned systems, and big-data saturation mean that the tactical cloud is only a Band-Aid on a rapidly bleeding wound. What is more, tactical clouds hosted on planes or ships would be prime targets for an adversary force. To ensure redundancy, militaries would need more of these hosts, but more hosts only compound the communications problems.
Ultimately, the Kill Web will yield decreased situational awareness, potentially degraded communications, and a “sensor rich” environment that produces nothing but noise. The Kill Web starts to trap not only its own spiders but potentially a host of others, none of which may be its prey. Defense and government officials must look hard at the limitations of these technologies as well as their promise.
There’s a lot going on here, and it’s only roughly captured by any of the words that anyone is using. Then again, one can argue that it’s nothing new. “Kill web” and “distributed lethality” are aspirational slogans pointing in the direction the Navy thinks it needs to go; information sharing at multiple scales, or a sensor radioing a shooter where to find a target (or even the aiming coordinates), is hardly a revolutionary idea that no one ever thought of before.
What is qualitatively different is the level of automation that will be accepted to cope with the speed of battle and the intensity of data. Loss of human control becomes a very real threat. You do not have to put the sensor, analyst, and shooter into one AI-driven package, nor do you have to take all the people out entirely, before you have effectively autonomous weapons systems, with all the attendant risks.
The current rhetorical emphasis on human-machine teaming and we-won’t-take-the-human-out-of-the-loop (oh, unless our adversaries do, which, yeah, they probably will, so we’ll be getting ready for that) is just a thin veil of ideological cover that just about everyone who is actually involved with this stuff sees right through. We’re going killer robot just as fast as we can.