The Duck has a symposium that engages the European Journal of International Relations' special issue on "The End of IR Theory."  One of the pieces is the final version of the draft that John Mearsheimer and Stephen Walt circulated last winter.  If y'all remember, I had a few words to say about the paper.  In short, I was not a fan.  Now that the final version is out and they have a post at the Duck, I could take more swings at it, but I don't think that is necessary, as Dan Reiter has an excellent and much more succinct take on it.  Plus, I said much of it last January.

Instead, other than a few closing comments, what I want to do here is simply present some figures and tables about the state of Grand Theory and of Hypothesis Testing.  Yes, I took the hypotheses that were implicit and/or explicit in the original piece and tested them with data from the TRIP project based at the College of William and Mary.  I presented a paper at a conference in May, and it will need significant revisions.  But it is the best I got right now as I don’t have the time to do the re-writes quite yet (sorry, Mike).  The paper is available here, and, again, the work is preliminary (but cite it anyway).

Before going ahead, I should make my prejudices clear.  My work has always been middle-range theory.  I have not asked how the world operates in general but rather: why do countries take the sides they do in other people's ethnic conflicts?  Why did some countries engage in irredentism after the Cold War while others did not?  Why did countries vary in how they managed their efforts in Afghanistan (how do alliances operate in wartime)?  In each case, I built a theory (sometimes with co-authors who should get all of the blame) that relied heavily on existing theories of domestic politics to understand variations in foreign policy.  I tested my hypotheses with case studies most of the time and with quantitative analyses some of the time.  So, I am not a grand theorist, but neither am I a mere hypothesis tester (damn few of those, really, as most work requires some theorization to get to testable hypotheses).  So, at first glance, I don't have a dog in this fight.  But of course, I do.  I elaborate my biases elsewhere because they distract from the hypothesis testing I seek to present here.  But check out the link and then take what I say here with a giant grain of salt.

Bottom line up front: There are more IR outlets (journals) today than before, so the gains in non-paradigmatic work (not grand theory in my simple coding scheme) do not mean there is less grand theory, only that there is relatively less.

The number of articles featuring non-paradigmatic work has increased quite a bit over the past few years, but this is mostly driven by the existence of more outlets.

If the concern is with the decline of the Isms, then figure 1.2 should provide some solace:

Grand theory was never really that ascendant except around 1997 and 2000 or so.  There is basically the same amount of grand theory being published as before.  Non-grand stuff has climbed.  More work is required to see how much of this is counter-terrorism, counter-insurgency, and other similar stuff.  Those folks who don't mind absolute gains should be fine with this picture.  Those concerned about relative gains?  Not so much.

M&W argue that it is not about methods, but given how much space they dedicate to blasting quantitative work, well, it might be worth checking out the trends:

The line for non-quantitative work is fairly smooth.  This needs to be unpacked, of course, but the folks who don't regress or MLE are not going away.  Yes, there is more quantitative work than ever, but the numbers here show that we are at parity, not that quant has killed the non-quant.

One could wonder if this is not really about grand theory but about realism since M&W are, well, different kinds of realists:

The irony is that the man who argued that we would soon miss the Cold War would not.  Realism peaked in the 1990s and again after Iraq.  The trends seem to suggest that Realism's popularity is returning to Cold War levels.  Of course, this is probably all cyclical, but the point of this graph might be that M&W may be right in that their kind of Grand Theory is declining.

I ran some regressions on these data as well, but I will not fill up this post with heaps of asterisks.  One of the fun findings, however, is that the two journals most associated with M&W, International Security and Security Studies, are (holding heaps of other stuff constant) strongly associated with less grand theory (although more Realism).  I also did some analysis of how prevalent people perceived various approaches to be.  Everyone thinks that their own approach is more prevalent — that is, realists are significantly correlated with survey responses reporting that realism is more prevalent, liberals with liberalism, and so on.

Ok, so that’s number of articles in a particular category.  How do people self-identify?

The 2004 and 2011 surveys were not identical, so we don't have 2004 numbers for non-paradigmatic.  What we do find is that most folks who answered the survey in the US (I separated out the non-American numbers in the 2011 survey since the 2004 one covered only Americans) do consider themselves as belonging to an "ism" despite David Lake's best efforts (sorry, David).  Liberalism took the biggest hit over the decade, and there are also fewer people who consider themselves Realists.  Who are the winners?  Non-paradigmatic folks and Constructivism.  Still, only 26% or so consider themselves non-paradigmatic.

One of M&W's complaints is that the professionalization of the discipline means that quantity of output is the key, but if citations are king, then wouldn't the most interesting, most dynamic, grandest of theoretical work be the best gamble?  After all, the folks who get cited the most are the ones who have done the grandest of theory, according to the folks M&W mention at the top of their piece.  Let's check out citation patterns:

Grand paradigmatic stuff gets cited more, much more.  Only the really old non-paradigmatic stuff out-cites the grand theory stuff, and that is probably due to a few key articles from thirty-four years ago having a heap of citations (remind me to check that out again).

Of course, M&W suggest the fetish for non-grand theory is recent, so how about these:

The patterns are similar for the first several years, which might mean that grand theorists have neither an advantage nor a disadvantage when they go up for tenure.  But grand theorists' work does tend to get cited more in the medium run.  Focusing on just the past several years still gives the edge to grand theory.  The quant work in the paper indicates that grand theory over the entire range is correlated with more citations, but that if we truncate the sample to 2005-2011, the correlation loses significance.  Quant work is associated with more cites in either time frame.

You can check out the paper for the tables and figures on how much the Isms are taught in classes at the undergrad and graduate levels.  The key finding there is that people teach what they do, so Realists tend to have more Realism, Liberals tend to have more Liberal stuff in their courses, and so on.

A couple of remaining thoughts:

  • One of my reactions to their piece is this: if the field really now has more "hypothesis testers" than "grand theorists," is that so bad?  That is, what kind of balance makes sense: 50-50, 70% grand theorists with a minority doing the hypothesis testing, or a small number of big thinkers and then lots of people testing the implications?  It just seems logical that the last would make the most sense….  However, as the figures and tables suggest, there is not an "imbalance in the Force."  And, if we remember correctly, when folks sought to create balance in the Force, they undermined their own hegemony, did they not?  (If you cannot get the Star Wars prequels reference here, you need to pay more attention to Friday Nerd Blogging.)
  • "This article is not a cri de coeur by two grumpy realists who are opposed to hypothesis testing in general and quantitative analysis in particular" (p. 431).  Um, sure.
  • M&W tend to argue that hypothesis testing is the easy path, taking less time and thought than grand theorizing.  I wonder if they have ever had to code data, create a dataset, check it for errors, and all that.  Because I can say from experience, it is damned hard and time-consuming.  Of course, starting with someone else's data makes things faster, but not that easy.  I used a heap of Minorities at Risk data, and that required a lot of work to re-code.  Anyhow, oy.

Finally, they conclude that "We therefore favor a diverse intellectual community where different theories and research traditions coexist."  If the TRIP data say anything, they say that we are indeed a diverse intellectual community where different theories and research traditions coexist.  Which is why I had this pic at the front of my slides:



We are a big tent with all kinds of animals and acts playing in the many rings inside.  We tend to notice both those who are most like us and those that are most different.  Because the tent is getting bigger, we tend to notice the strange folks a bit more, but our kindred creatures are not going away.  There will always be room for the elephants and the lions.