Since this is my first post, I want to thank Charli, Dan, and everyone else for giving me this opportunity to, well, spout off about whatever I want. They may regret it. To start, as some of you might remember from a guest post that I did over the summer, one of the projects I am currently working on involves looking at the best ways to accurately forecast international political events such as whether Iran will test a nuclear weapon before 1 January 2014 or who will win the upcoming election in Sierra Leone.
In the wake of the stunningly accurate election models designed by many over the last several months (for a roundup on those that nailed it – and those that didn’t – go here), Jay Ulfelder just wrote a fascinating piece for ForeignPolicy.com titled “Why The World Can’t Have A Nate Silver.” I want to highlight it first and foremost because it is a good piece of work. In it, Jay outlines why it is so difficult to forecast international political events: we often lack excellent, or even above average, or even minimally acceptable data on the necessary “plausible predictors” (Jay’s phrase) to accurately forecast important events such as revolutions or wars. For those interested in Jay’s writing in general, he runs a great blog called Dart-Throwing Chimp.
The U.S. Intelligence Advanced Research Projects Activity is currently funding a forecasting tournament designed to reach the forecasting frontier – or do the best we can despite all of the real limitations that Jay identifies. My work on forecasting is part of this effort. I’m part of the Good Judgment Team run by Phil Tetlock and Barbara Mellers. We are using methods such as the wisdom of crowds, prediction markets, and teams, along with some forecasting algorithms, to try to build the best mousetrap possible. You can read a brief description of what we are doing, and some suggestions for how that might influence activities like the Global Trends project, here.
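For readers unfamiliar with how "wisdom of crowds" aggregation works in practice, here is a minimal sketch (not the Good Judgment Team's actual algorithm; the forecaster numbers are invented for illustration): average many independent probability estimates for a yes/no question, then score the result with the Brier score, the standard squared-error measure used in forecasting tournaments.

```python
# Minimal sketch of "wisdom of crowds" aggregation for a yes/no question.
# The individual forecasts below are made up for illustration only.

def crowd_forecast(probabilities):
    """Unweighted mean of individual probability forecasts."""
    return sum(probabilities) / len(probabilities)

def brier_score(forecast, outcome):
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better; 0.0 is a perfect forecast."""
    return (forecast - outcome) ** 2

# Five hypothetical forecasters on "Will event X happen by date Y?"
individual = [0.9, 0.7, 0.8, 0.6, 0.75]
crowd = crowd_forecast(individual)  # 0.75

outcome = 1  # suppose the event did happen
crowd_error = brier_score(crowd, outcome)  # 0.0625
avg_individual_error = sum(brier_score(p, outcome) for p in individual) / len(individual)  # 0.0725
```

Note that the crowd's Brier score (0.0625) beats the average individual's (0.0725): because squared error is convex, the aggregate can never do worse than the average forecaster, which is one reason simple crowd averaging is a surprisingly strong baseline.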
One of the things we need to make the project run, however, is forecasters. In the coming days, in addition to beginning to post more frequently on this and other topics, I will be putting out a call for forecasters. We are looking to recruit a new wave of people willing to (anonymously) put forward their best guesses on a litany of potential developments around the world. No professional expertise required.
In the meantime, happy Friday, everyone. You can follow me on Twitter @mchorowitz.
Mike, welcome aboard. While I’m deeply curious to see what the outcomes of these forecasting tournaments are, I have to admit that I am more interested as a committed science-fiction fan than as a practicing social scientist. As expressive works of art aggregated through some relatively straightforward math, I think the whole exercise is a fascinating way of shedding light on our most unreflectively held assumptions about the world, so the predictive accuracy seems to me to be kind of a red herring in the mix. H. G. Wells got a lot of things wrong, got a lot of things right, and we miss the plot if we focus on that minor question instead of on the broader question of what kinds of assumptions he was operating with, how they relate to broader societal values, and how well he gamed them out (to say nothing of whether he successfully imagined compelling characters…signs point to no, in my book).
The thing that Jay overlooks in the piece you reference, I think, is the extent to which electoral forecasting works (when it works) not just because we have data, but because the game has relatively stabilized and well-defined rules, and because the data that we have in tracking polls basically approximates a repeated experimental trial of the election. In that way, democratic elections with tracking polls are like baseball: not only a sufficiently large n, but a sufficiently structured/organized system of interaction within which the various “n”s are approximately similar by design. I would disagree with Jay that the difference is that we have “direct measures of interests and intentions” in electoral politics and polling; I think that the difference is that we have virtual elections over and over under the same or similar rules. Which we don’t have globally, so I would count myself even more pessimistic about global forecasting of events: the problem is not data, but organizational structure.
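PTJ's "repeated experimental trial" point can be made concrete with a small sketch (the poll numbers are invented, and this is a simplification that ignores house effects and correlated errors): if each tracking poll is treated as an independent sample of the same electorate, pooling them is just combining Bernoulli trials, and the standard error shrinks as the combined sample grows.

```python
import math

# Sketch of pooling tracking polls as repeated trials of one election.
# Each poll is (share_for_candidate, sample_size); all figures invented.

def pooled_estimate(polls):
    """Combine polls into one sample-size-weighted proportion and its
    standard error, treating each poll as an independent draw from the
    same electorate (a simplifying assumption)."""
    total_n = sum(n for _, n in polls)
    pooled_p = sum(p * n for p, n in polls) / total_n
    se = math.sqrt(pooled_p * (1 - pooled_p) / total_n)
    return pooled_p, se

polls = [(0.52, 800), (0.50, 1000), (0.53, 600)]
p, se = pooled_estimate(polls)
# Pooled n = 2400, so the standard error (~1 point) is much smaller than
# any single poll's (~1.6-1.8 points). Without this repeated-trial
# structure -- the situation for most international events -- there is
# nothing to pool.
```

This is exactly what the revolutions-and-wars case lacks: not just data, but many comparable draws from the same underlying process under stable rules.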
PTJ – thanks for the warm welcome. Much appreciated. Those are great points. That’s one of the reasons why IARPA wanted to fund this. A sense that the traditional methods were just not good enough. We’ll see if we can do better! More on that later.
I think you are right though that consistent “rules of the game” are critical here because, if you buy the accuracy of the polling, they are a repeated “experimental trial” of the election. That’s a really nice point.