The idea of prediction in the study of international relations has been a persistent thought in my head for some time. Ostensibly, in our (mostly) non-experimental discipline, prediction represents the preeminent demonstration of a theory’s veracity. Of course, this perspective derives from simplistic conceptions of science as practiced in the natural sciences and as a consequence fits poorly with IR. Regression analyses struggle to produce models that ‘explain’ more than a small percentage of the variance in the dependent variable(s)—making prediction of outcomes nearly impossible. Our discipline-defining structural theories also struggle to make more than vague predictions about systemic patterns—Waltz, after all, rejected the idea that structural realism is a theory of foreign policy, which would commit the theory to a much more exacting level of prediction. Nonetheless, despite the problems with prediction, my sense is that it remains with us as an ideal.
In light of these musings, I read with some interest The Economist’s review of Philip Tetlock and Dan Gardner’s new book discussing the results of the Good Judgment Project, with a specific focus on so-called superforecasters*. These superforecasters were consistently able to outperform financial markets and trained intelligence analysts in predicting real-world outcomes. The superforecasters are, according to the review, exclusively foxes: people who think about the world in terms of a wide range of variables. This is in contrast to hedgehogs, who think about the world in terms of one or two big ideas—not unlike how many theories in IR conceive of the world (hello anarchy). These superforecasters do not build statistical models, although they do presume that most of the time things are ‘normal’ and that major deviations do not change the system (i.e. regression to the mean).
In line with their ‘fox’ analytical orientation, superforecasters “have a healthy appetite for information, a willingness to revisit their predictions in light of new data, and the ability to synthesise material from sources with very different outlooks on the world.” In other words, the superforecasters emphasize contingency and intellectual flexibility in their analysis. Even so, they are only able to make accurate predictions in the near term. I think these findings suggest important limitations to how IR generally treats the world it seeks to analyze. In particular, prediction might in large part go out the window. But if it does, what are the epistemological and methodological ramifications?
*Full disclosure: I participated in the Good Judgment Project and was asked at one point to participate as a superforecaster (I declined).
The epistemological ramifications are clear: we need to examine our ontological presuppositions, because this problem of prediction is a logical consequence of adopting a Humean view of causality, which leads to a fruitless search for ‘covering laws’.
Taking a tour through positivist philosophy of science shows how this works. By adopting the view that causation is something an observer projects onto conjunctions of sensory events, logical positivists defined scientific theory-building as an attempt to find and specify the stable form of these phenomenological relations. On this view, entities like atoms are theoretical devices for organising phenomena into their constant correlations, rather than natural kinds.
If this is your view of causality, and by extension your view of theory, then prediction becomes the only way to determine whether the candidate Law of Nature is a valid theory. In other words, if you engage in nomological theory-building, then you test your theories by deducing their empirical implications and then checking to see if those implications obtain–‘we should expect to see [phenomenon/sense-event X] if [theory] is true’.
If we want to move away from prediction, or take seriously the possibility that prediction is in most cases impossible outside of the very near term, then we have just lost the ability to test positivist theories.
How likely is it that this conclusion will prove acceptable to a disciplinary mainstream professionally and cognitively built around positivist assumptions about what makes up the world and how we can know about it?
This is where I was going, but you did a much better job of staking out the claim.
Actually I was worried I was being a little too forceful.
Simon – how do you feel about more modest attempts to produce contingent generalizations (e.g. Pouliot, 2007)?
I’m not really sure the idea of a ‘contingent generalisation’ makes much sense. Contingency and generalisation are at odds. Typically the kind of historical sociology that people like Bourdieu (who informs Pouliot there) want to do is a form of realist process tracing, where you look at how the causal dispositions of real social structures have acted to shape the trajectory of a delimited case. The main kind of generalisation you see here–which is actually pretty controversial amongst scientific realists in the social sciences–is to treat some social structures as token instantiations of social kinds. For example, we might say that ‘the liberal-democratic state’ is a type of social entity, all instantiations of which possess the causal power to ϕ under conditions C. Some critical realists sign on to this, but other realists like Daniel Little don’t. Bourdieu fits into the latter camp. This is also why I don’t fully agree with Pouliot’s suggestion that we should be inductive: a key manoeuvre in inferring the properties of specific social structures present in specific cases is transcendental inference, which is deductive.
‘For example, we might say that ‘the liberal-democratic state’ is a type of social identity’
Sorry, I mean ‘entity’ not ‘identity’. Fingers slipped.