Prediction: For whom the bell tolls?

12 October 2015, 1148 EDT

The idea of prediction in the study of international relations has been a persistent thought in my head for some time. Ostensibly, in our (mostly) non-experimental discipline, prediction represents the preeminent demonstration of a theory's veracity. Of course, this perspective derives from simplistic conceptions of science as practiced in the natural sciences and as a consequence fits poorly with IR. Our regression models struggle to 'explain' more than a small percentage of the variance in the dependent variable(s), making point prediction of outcomes nearly impossible. Our discipline-defining structural theories also struggle to make more than vague predictions about systemic patterns; Waltz, after all, rejected the idea that structural realism is a theory of foreign policy, which would commit the theory to a much more exacting level of prediction. Nonetheless, despite these problems, my sense is that prediction remains with us as an ideal.
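To make the variance point concrete, here is a minimal sketch with synthetic data (all numbers invented for illustration): a predictor can have a genuine effect and still leave the overwhelming share of the variance unexplained, which is roughly the situation most of our regressions are in.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a typical IR dataset: one weak predictor, lots of noise.
n = 500
x = rng.normal(size=n)                        # some structural covariate
y = 0.3 * x + rng.normal(scale=1.0, size=n)   # true effect is small relative to noise

# Ordinary least squares: slope, intercept, and R-squared by hand.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")
# R^2 should land near 0.08: a 'real' effect that still leaves roughly
# 90% of the variance unexplained, so point prediction of y from x is hopeless.
```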

In light of these musings, I read with some interest The Economist's review of Philip Tetlock and Dan Gardner's new book, Superforecasting, discussing the results of the Good Judgment Project, with a specific focus on so-called superforecasters*. These superforecasters were consistently able to outperform financial markets and trained intelligence analysts at predicting real-world outcomes. They are, according to the review, exclusively foxes: people who think about the world in terms of a wide range of variables. This is in contrast to hedgehogs, who think about the world in terms of one or two big ideas, not unlike how many theories in IR conceive of the world (hello, anarchy). Superforecasters do not build statistical models, although they do presume that most of the time things are 'normal' and that major deviations do not change the system (i.e., regression to the mean).
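For readers who want that last statistical intuition spelled out, here is a small simulation of regression to the mean (a toy model of my own, not anything from the book): if each observed outcome mixes a stable component with transient noise, extreme outcomes tend to be followed by milder ones, which is what licenses the presumption that 'normal' will reassert itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each 'country-year' outcome = stable baseline + transient shock.
n = 100_000
baseline = rng.normal(size=n)
year1 = baseline + rng.normal(size=n)   # observed outcome in year 1
year2 = baseline + rng.normal(size=n)   # same baseline, fresh shock, year 2

# Condition on extreme year-1 outcomes (top 5%) and see what year 2 looks like.
extreme = year1 > np.quantile(year1, 0.95)
print(f"mean year 1 (extreme cases): {year1[extreme].mean():.2f}")
print(f"mean year 2 (same cases):    {year2[extreme].mean():.2f}")
# Year 2 comes in well below year 1 for those cases: the shock was transient,
# and the system drifts back toward its baseline rather than being transformed.
```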

In line with their 'fox' analytical orientation, superforecasters "have a healthy appetite for information, a willingness to revisit their predictions in light of new data, and the ability to synthesise material from sources with very different outlooks on the world." In other words, superforecasters emphasize contingency and intellectual flexibility in their analysis. Even so, they are only able to make accurate predictions in the near term. I think these findings point to important limits on how IR generally treats the world it seeks to analyze. In particular, prediction might in large part go out the window. But if it does, what are the epistemological and methodological ramifications?
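Mechanically, 'revisiting a prediction in light of new data' looks a lot like Bayesian updating. A minimal sketch, with every number in the scenario invented for illustration:

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability of an event after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical forecasting question: will a ceasefire hold for 90 days?
p = 0.30  # initial estimate, anchored on base rates of comparable ceasefires

# New report: monitors observe troop withdrawals. Suppose such reports appear
# in 70% of ceasefires that hold but only 20% of those that collapse
# (both likelihoods invented for the sketch).
p = bayes_update(p, p_evidence_if_true=0.70, p_evidence_if_false=0.20)
print(f"revised estimate: {p:.2f}")  # 0.60: a large but disciplined revision
```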


*Full disclosure: I participated in the Good Judgment Project and was asked at one point to participate as a superforecaster (I declined).