Who needs experts to forecast international politics?

25 June 2012, 1230 EDT

This is a guest post by Michael C. Horowitz, Associate Professor of Political Science at the University of Pennsylvania.

Who can see the future? For us mere mortals, it’s hard, even for so-called experts. There are so many cognitive biases to take into consideration, and even knowing your own weaknesses often does not help. Neither does being smart, apparently. So, what does make for “good judgment” when it comes to forecasting? When, if ever, do experts have advantages in making predictions? And how can we combine expertise and statistical models to produce the best possible predictions? This is not just an academic question but one relevant for policy makers as well, as Frank Gavin and Jim Steinberg recently pointed out. There are new efforts afoot to try to determine the boundary conditions in which experts, both political scientists and otherwise, can outperform methods such as the wisdom of crowds, prediction markets, and groups of educated readers of the New York Times. At the bottom of this post is information on how to assist in this research. I hope you will consider doing so.
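(For readers who want the mechanics rather than the metaphor: the “wisdom of crowds” claim is simply that averaging many noisy individual estimates tends to beat most of the individuals making them. Here is a minimal simulation in Python; the numbers are invented for illustration and have nothing to do with any tournament data.)

```python
# A toy demonstration of the "wisdom of crowds": the average of many
# noisy estimates is usually closer to the truth than the typical
# individual. All numbers are simulated, purely for illustration.
import random

random.seed(0)
truth = 0.6  # the "true" probability of some hypothetical event

# 200 forecasters, each seeing the truth through independent noise
forecasts = [min(max(truth + random.gauss(0, 0.2), 0.01), 0.99)
             for _ in range(200)]

crowd = sum(forecasts) / len(forecasts)
avg_individual_error = sum(abs(f - truth) for f in forecasts) / len(forecasts)

print(f"crowd estimate: {crowd:.3f} (error {abs(crowd - truth):.3f})")
print(f"average individual error: {avg_individual_error:.3f}")
```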
Jacqueline Stevens recently argued in the New York Times that “Political Scientists are Lousy Forecasters.” In her article, which others have already dissected, she discusses Phil Tetlock’s work on expert forecasting. His book, Expert Political Judgment, has become the definitive work on the subject. The postage-stamp version she cites is that experts are only slightly better than dart-throwing chimps at predicting the future, if they are better at all.
However, the notion that Tetlock argues that experts are know-nothings when it comes to forecasting is simply wrong, as others have already pointed out. More important, Expert Political Judgment was a first foray into the uncharted domain of building better forecasting models. Several years later, Tetlock is back at it, and this time he has invited me, Richard Herrmann of Ohio State University, and others to join him. The immediate goal this time is to participate in a forecasting “tournament” sponsored by the United States intelligence community. The intelligence community has funded several teams to go out and build the best models possible – however they can – to forecast world events. Each team forecasts the same events, answering a list of questions posed by the sponsor, and then submits its predictions [note: Tetlock’s team dominated the opposition in year one, so we’ll find out this year whether adding me helps or not. Unfortunately, there’s no place to go but down].
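An aside on scoring: tournaments like this typically judge probability forecasts with a proper scoring rule, and the Brier score is the most common choice. The sponsor may use a different rule, so treat the sketch below as an illustrative assumption rather than a description of the tournament’s actual metric.

```python
# A sketch of the Brier score, a standard (assumed here, not confirmed)
# way to score probability forecasts against 0/1 outcomes.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability and what actually happened.
    Lower is better; always forecasting 0.5 earns 0.25 on every question."""
    return (forecast - outcome) ** 2

# A confident, correct forecast scores well...
print(brier(0.9, 1))  # 0.01
# ...while the same confidence on the wrong side is punished severely.
print(brier(0.9, 0))  # 0.81
```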

Our team is called the Good Judgment team, and the idea is not only to win the tournament but also to develop a better understanding of the methods and strategies that lead to better forecasting of political events. There are many facets to this project, but the one I want to focus on today is our effort to figure out when experts such as political scientists might have advantages over the educated reader of the New York Times when it comes to forecasting world events.
One of the main things we are interested in determining is when experts add value in making predictions about the world. Evidence from the first year of the project (year 2 started on Monday, June 18) suggests that, contrary to Stevens’ argument, experts might actually have something useful to say after all. For example, we have some initial evidence, on a small number of questions from year 1, suggesting that experts update faster than educated members of the general public: they are quicker to work out the full implications of changes in events on the ground and to revise their beliefs in response.
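To make “updating” concrete: the textbook way to model it is Bayes’ rule, in which a forecaster revises a prior probability in light of new evidence. The sketch below uses invented numbers and a hypothetical scenario; it illustrates the logic of updating, not anything from our data.

```python
# Bayesian updating of a probability forecast. Scenario and numbers are
# hypothetical, purely to illustrate the mechanics.

def update(prior: float, likelihood_ratio: float) -> float:
    """Revise P(event) given evidence, where likelihood_ratio is
    P(evidence | event) / P(evidence | no event)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A forecaster starts at 20% that a country gains entry to an alliance.
p = 0.20
# News breaks that is three times as likely if accession is on track.
p = update(p, likelihood_ratio=3.0)
print(f"Posterior after the news: {p:.2f}")  # roughly 0.43
```

The “faster updating” finding, in these terms, is that experts move from the prior to the posterior more quickly and more fully than the educated reader does.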
Over the course of the year, we will be exploring several topics of interest to the readers – and hopefully authors – of this blog. First, do experts have advantages when it comes to making predictions based on knowledge of process? In other words, does knowing when the next NATO Summit is occurring help you make a more accurate prediction about whether Macedonia will gain entry to the alliance by 1 April 2013 (one of our open questions at the moment)? Alternatively, could it be that experts have a better understanding of world events when a question is first asked, but that advantage fades over time as the educated reader of the New York Times updates in response to world events?
Second, when you inform experts of the predictions derived from prediction markets, the wisdom of groups, or teams of forecasters working together, can they use this information to produce more accurate predictions than the markets, the crowd, or the teams, or do they make things worse? In theory, we would expect experts to assimilate that information and use it to more accurately determine what will happen in the world. Or, perhaps an expert can recognize when the non-experts are wrong and outperform them. In reality, will this just demonstrate that experts are stubborn, and not in a good way?
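One simple way to picture what “assimilating” a market or crowd forecast might look like is to pool the expert’s probability with the crowd’s, for example by averaging on the log-odds scale. The weighting below is an arbitrary illustration, not the aggregation scheme our project actually uses.

```python
# Pooling an expert forecast with a crowd forecast on the log-odds
# scale. The 50/50 weight is an assumption for illustration only.
import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def pool(expert: float, crowd: float, expert_weight: float = 0.5) -> float:
    """Weighted average of two probability forecasts in log-odds space."""
    z = expert_weight * logit(expert) + (1.0 - expert_weight) * logit(crowd)
    return inv_logit(z)

print(f"{pool(expert=0.70, crowd=0.40):.2f}")  # about 0.56
```

A stubborn expert, in this picture, is one whose effective weight on the crowd stays near zero even when the crowd is better calibrated.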
Finally, are there types of questions where experts are more or less able to make accurate predictions? Might experts outperform other methods when it comes to election forecasting in Venezuela or the fate of the Eurozone, but prove less capable when it comes to issues involving the use of military force?
We hope to explore these and other issues over the course of the year and think this will raise many questions relevant for this blog. We will report back on how it is going. In the meantime, we need experts who are willing to participate. The workload will be light – promise. If you are interested in participating, expert or not, please contact me at horom (at) sas (dot) upenn (dot) edu and let’s see what you can do.