Michael Horowitz and Philip Tetlock have an interesting piece in Foreign Policy that examines the record on long-range forecasting of global events, 15 to 20 years into the future. They acknowledge the inherent difficulties of such projections, but still wonder:
whether there are not ways of doing a better job — of assigning more explicit, testable, and accurate probabilities to possible futures. Improving batting averages by even small margins means the difference between runner-ups and World Series winners — and improving the accuracy of probability judgments by small margins could significantly contribute to U.S. national security.
Overall, I like the piece, but I do wonder about a couple of the basic premises and their prescription.
1. Would improving the accuracy of probability judgments actually enhance US national security? I’m not convinced, and, unfortunately, Horowitz and Tetlock don’t unpack this claim. They do acknowledge, and I agree, that improving accuracy would be difficult and that any gains would come only at the margins. The world is getting more complex, not less. It is more dynamic, not less. More actors, and new kinds of actors, interacting in the international system with greater frequency, intensity, and speed mean a constantly changing strategic environment in which actors act, react, and keep reshaping that environment. In short, minor improvements in accuracy may not buy much, because on the whole everything is getting more complex.
2. Is accuracy the right metric? Even if we did have a better understanding of the future (or thought we did), any policy calibrations made today on the basis of what that future might look like could alter that future in ways that undercut the forecast’s accuracy. In this sense, accuracy may well be the wrong metric (a short sketch of how forecast accuracy is typically scored follows below).
3. Is there a downside in trying to get better? Maybe. Horowitz and Tetlock conclude:
Even if we were 80 percent or 90 percent confident that there is no room for improvement — and the Global Trends reports are doing as good a job as humanly and technically possible at this juncture in history — we would still recommend that the NIC conduct our proposed experiments. When one works within a government that routinely makes multibillion-dollar decisions that often affect hundreds of millions of lives, one does not have to improve the accuracy of probability judgments by much to justify a multimillion-dollar investment in improving accuracy.
Again, I think there is utility in long-range forecasting exercises; I’m just not sure I see any real benefit from improved accuracy at the margins. There may actually be some downsides. First, a “multimillion-dollar investment” (they don’t tell us exactly how much) is still real money, and it may be a waste of time and money to throw even more resources at an effort that is principally of interest only to the participants. Do policymakers really get much from projects like Global Trends or other long-range forecasts, and would they get added benefit from marginal improvements in accuracy? They already have their own biases and perceptions of the future; do these exercises have any real influence?
Second, what if spending more time, money, and other resources to enhance these capabilities alters decision-makers’ perceptions and gives them an unfounded sense of accuracy, i.e., they come to see long-range forecasting as producing accurate or realistic futures? We may then get a whole host of policy reactions that are unnecessary, wrong, and counterproductive, based on what are still probabilistic outcomes.
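For concreteness, here is roughly what “accuracy of probability judgments” means in the forecasting literature (Horowitz and Tetlock don’t spell out a measure in the piece): forecasts are scored against what actually happened, commonly with something like a Brier score, the metric used in much of Tetlock’s own forecasting research. A minimal sketch in Python, with entirely hypothetical forecasts and outcomes:

```python
# Minimal sketch: scoring probability judgments with the Brier score.
# All forecast values and outcomes below are hypothetical and purely illustrative.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes.
    0.0 is perfect; always guessing 50% earns 0.25; lower is better."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical analysts forecasting the same five events
# (1 = the event occurred, 0 = it did not).
outcomes  = [1, 0, 0, 1, 0]
analyst_a = [0.70, 0.30, 0.40, 0.60, 0.20]   # somewhat calibrated
analyst_b = [0.60, 0.40, 0.45, 0.55, 0.35]   # hedges toward 50%

print(f"Analyst A: {brier_score(analyst_a, outcomes):.3f}")  # ~0.108
print(f"Analyst B: {brier_score(analyst_b, outcomes):.3f}")  # ~0.170
```

A marginal improvement in accuracy would show up as a slightly lower score; the question raised above is whether shaving a few hundredths off a number like this actually translates into better policy.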
I’m not saying we shouldn’t tweak these exercises to make them better for all involved. I also agree with Horowitz and Tetlock that there is utility in conducting these long-range forecasting efforts. It is helpful to enlist a broad set of academic and government views to assess current and long-term trends. My own sense is that these efforts probably tell us more about the present than they do about the future. They force analysts to articulate their often embedded assumptions and to project into the future the likely consequences of their current assessments. I think we should keep them; I’m just not sure we need to spend too much more time and money on them. Of course, I might be wrong.