Professionalization and the Poverty of IR Theory

27 March 2012, 0546 EDT

[I wrote the bulk of this post very late at night while suffering a bout of insomnia. In the end, I ran out of energy and called it quits. Thus, I’ve edited the post for clarity and style. Major content updates are in blue text (bad idea, now abandoned).]

[2nd Update: I called this post the poverty of IR Theory not the poverty of IR. There’s a difference.]

PM’s post on getting into political-science PhD programs continues to provoke spirited debate. Of particular note is reaction to his claims (echoing Dan Drezner) about the importance of mathematical and statistical skills. As “Evanr” writes:

It sounds like you think these people emerge ‘ex nihilo’ as scholars at the top of the field. At one point and time they were ‘new graduate’ students too, and were very much made the way they are by virtue of their training. I don’t think Wendt would have produced the scholarship he did without the non-mathematical influence of Raymond Duvall [nb: Bud started out his career doing statistical work; Alex was also trained by David Sylvan, whose work extends to agent-based modeling]. Is it not worthwhile to look at the training of top scholars to see how we should shape current students?

There may be a vast consensus – I’m not sure if there is – that specific forms of training are indispensable to graduate students, but this consensus may be wrong. It sounds like your recommendations are more about reproducing the orthodoxies of the field to make oneself a marketable candidate than they are intended to produce thoughtful, innovative scholarship. In the short term this may give you an edge in entering the field, but it may also make for lackluster career advancement. With the exception of certain ‘citation cartels’, thinking like everyone else is not a great way to get published.

Having spent far too many years on my Department’s admissions committee–which I currently chair–I have to agree with part of PM’s response: it is simply now a fact of life that prior mathematical and statistical training improves one’s chances of getting into most of the first- and second-tier IR programs in the United States. But that, as PM also notes, begs the “should it be this way?” question.

My sense is that over-professionalization of graduate students is an enormous threat to the vibrancy and innovativeness of International Relations (IR). I am far from alone in this assessment. But I think the structural pressures for over-professionalization are awfully powerful; in conjunction with the triumph of behavioralism (or what PTJ reconstructs as neo-positivism), this means that “theory testing” via large-n regression analysis will only grow in dominance over time. I’d also caution some of my smug European and Canadian friends that the writing is on the wall for them as well… albeit currently in very small print.

I should be very clear about the argument I develop below. I am not claiming that neopositivist work is “bad” or making substantive claims about the merits of statistical work. I do believe that general-linear-reality (GLR) approaches — both qualitative and quantitative — are overused at the expense of non-GLR frameworks–again, both qualitative and quantitative. I am also concerned with the general devaluation of singular-causal analysis.

Indeed, one of my “problems” in IR is that I am probably too catholic for my own good, and thus don’t have a home in any particular camp. My views are heavily inflected by my time in high-school and college debate, which led me to a quasi-perspectivist view of theoretical explanation: different kinds of work involve different wagers about what “counts” as instruments, knowledge, and results. Different kinds of work can and should be engaged in debate over these wagers, but that doesn’t mean we shouldn’t also evaluate work on its own terms. Thus, I get excited about a wide — probably too wide — variety of scholarship.*

What I am claiming is this: that the conjunction of over-professionalization, GLR-style statistical work, and environmental factors is diminishing the overall quality of theorization, circumscribing the audience for good theoretical work, and otherwise working in the direction of impoverishing IR theory. As is typical of me, I advance this claim in a way designed to be provocative.

1. Darwinian Pressure

We currently produce more PhDs than there are available jobs in IR. We produce far more PhDs than there are slots at the better-paying, research-oriented, well-located universities and liberal arts colleges in the United States. Given this fact of life, consider the following two strategies:

  1. Challenge your professors; adopt non-standard research designs; generally make trouble. 
  2. Focus on finding out the template for getting the best job; affirm your professors; adopt safe research designs.

The first strategy can work out; it sometimes works quite well. But it more often fails spectacularly. Given some of the trends in the field (reinforced themselves by over-professionalization — we face a series of feedback loops here), the first strategy is much riskier than it was fifteen years ago. PhD students may be a variety of dysfunctional things, but stupid generally isn’t one of them. It isn’t much of a surprise, then, that an ever-growing number of them choose the second pathway.

2. Large-N Behavioralism Triumphant

Consider, for a moment, the number of critical, post-structuralist, feminist, or even mainstream-constructivist scholars who hold tenure at A-list and near A-list IR programs in the United States. How many of these programs have more than one tenured professor doing these kinds of work? Still thinking, I bet.

How many of them have multiple tenure-track professors working in this idiom? I can think of a few, including Cornell, George Washington, Ohio State, and Minnesota. But that’s not a lot.

How many have multiple tenure-track professors doing quantitative work, particularly in open-economy IPE (PDF) and JCR-style international security? There’s no point in enumerating them, as virtually every program fits this description.

How many exclusively qualitative scholars–critical, neopositivist, or whatever–have gotten jobs at A-list and near A-list schools in the last five years? Not many.

Now recall my stipulation that most PhD students in IR aren’t stupid; most figure out pretty quickly that developing strong quant-fu (1) opens the door to a significant number of jobs and (2) closes the door on very few. After all, very few members of search committees will say “well, that applicant’s dissertation involves a multivariate regression, I think multivariate regressions aren’t proper social science, so I’m going to block him.” But, I’m sad to say, many members of search committees will refuse to seriously entertain hiring someone who doesn’t use lots of numbers–unless some sort of logroll is underway.

Now add the fact of exponentially increasing computing power. Combine that with (1) nifty statistics packages that do a lot of the work for you; (2) data sets that, although often junk, are widely accepted as “what everyone uses”; and (3) the “free pass” we too often give to using inappropriate-but-currently-sexy statistical techniques. What we’ve got is a recipe for monoculture and for the wrong kind of innovation in statistical methods, i.e., innovation driven by latest-greatest fever rather than thinking through how particular approaches might either shed new, and important, light on old problems or open up new problem areas.

That’s not to say that you can’t “pick wrong” on the quantification front. Some people think statistical inference via sampling techniques is worthless and that only experiments tell us anything interesting. Others think experiments never say anything worthwhile about ongoing political processes. And game-theorists, who do use math, just aren’t getting the kind of traction that proponents of the approach thought they would in the 1990s.

I’m not going to bash large-N or other quantitative studies. Like many Ducks, I don’t find the distinction between quantitative and qualitative research particularly helpful. But I will claim that the triumph of general linear reality (GLR) models in the form of multivariate regression has reinforced small-c conservative tendencies within the field in a variety of ways.

Many quantitative GLR acolytes are convinced — or, at least, publicly express conviction — that they are on the correct side of the demarcation problem, i.e., that. they. are. doing. S-c-i-e-n-c-e. Normal science. Not those stupid “paradigm” wars that wasted our time in the 1980s and 1990s, and certainly not journalism, political theory, non-falsifiable parable telling, or any of that other stuff that is most. definitely. not. Science. And is therefore not simply a waste of our time, but also a shot fired directly at the heart of progress. As in: trying-to-drag-us-back-to-the-dark-ages evil.

My rhetoric may be over the top, but I am not joking. Many perfectly nice, very interesting, extremely smart, and otherwise generous people really do believe that, in blocking the advancement of “alternative” approaches, they are fighting the good fight.** In this paradigm, innovation takes the form of technical improvements; competent work on topics that some percentage of peer reviewers believe to be interesting should be published; and, to be frank, a certain scholasticism winds up prevailing.

Would this be different if another “movement” currently enjoyed an advantage? Probably not. But I do think there’s something — as PTJ has written about — at work among those self-consciously committed to “Science” (before this was about quantitative methods it was about quasi-statistical qualitative work, which should put to rest the notion that we’re talking merely about numbers) that makes monoculture more likely. I’d feel less worried about this if I saw more persuasive evidence of cumulative knowledge-building in the field — rather than “truths” established and upheld exclusively by sociological processes — and if scholars doing even non-standard GLR work had an easier time of it.

3. The De-intellectualization of Graduate School

So what happens when students enter graduate school:

  • Already equipped with most of the methods training they will need;
  • Facing strong incentives to adopt the “template” strategy for getting a job;
  • Confronting a publishing and hiring environment in which methodological deviance is a liability;
  • Receiving instruction from at least some instructors who are convinced that there’s a “right way” and a “wrong way” to do social science; and
  • Training in Departments under intense pressure from Graduate School administrators to reduce the time-to-completion of the PhD?

Answer: an increasing risk that the IR PhD becomes a time of intellectual closure and of perfectly rational aversion to debates that require questioning basic assumptions. In short, a recipe for impoverished theorization.

4. Damnit, Don’t We Know Better?

The good news — as many angry graduate students who post on Political Science Job Rumors fail to understand — is that most of the “better jobs” elude scholars who, no matter how many publications they have, aren’t producing solid middle-range theory. If your work consists of minor tweaks to existing democratic-peace models or throwing variables into a blender and reporting results, then, well, don’t assume that there’s some sort of conspiracy at work when an apparently under-published ABD gets the position that you think you deserve.*** The bad news is that you are increasingly likely to get hired at a significant subset of institutions over creative scholars who don’t deploy multivariate regression… even if doing so would have been wildly inappropriate given the available data and/or the nature of their puzzle.

A number of dynamics are at work here, but the most distressing involves key dimensions of organized hypocrisy in the field. In particular:

  • We all know that peer reviewing is stochastic–governed by, for example, a surfeit of mediocre reviewers, their transient mental states (‘may your reviewers never read your manuscript right before lunch’), and overwhelmed editors. But we still treat the successful navigation of the slings and arrows of a few prestigious journals as the leading indicator of scholarly quality. Because, after all, why use your own brain when you can farm out your judgment to two or three anonymous reviewers?
  • We all know that quality is not the same as quantity, yet we still wind up counting the number of journal articles as an indicator of past and future scholarly merit.
  • We all know that it is nearly impossible to both make an innovative argument and provide full empirical support for it within the confines of a single article, yet we continuously shrink the length of journal articles, demand that empirics accompany theory, and discount “pure theory” articles — thus making it even more difficult to publish innovative arguments.
  • We all know that the peer-review process is already biased against controversial claims, yet more and more journals default to single-reviewer veto–a decision that makes it even harder to publish innovative work, let alone innovative theory.

These dynamics do, of course, sometimes let innovative arguments through. But they too often distort such arguments into conformist shadows of their former selves. Note again that these tendencies reinforce orthodoxy — whatever that orthodoxy is at the moment.

5. Conclusion

I’ve completely lost track of where I began, what the point was, and where I intended to go. But this is a blog, and I have tenure, so I can yell at the kids to get off my lawn… and otherwise rant the rant of the aging curmudgeon. And, just in case you aren’t clear about this: I am overstating the case in order to push discussion along. Get that?

And, if you didn’t get the moral of this story: I question the judgment of anyone who gets a PhD without developing statistical skills and being able to provide some evidence to committees that he or she has those skills. It. Just. Isn’t. Worth. It.

Does that make me part of the problem? Maybe. But I think one can hardly look at my record and come to that conclusion.

Manual trackbacks: James Joyner, Steve Saideman, Erik Voeten.

———
*I do have a pet peeve, however: scholarship that combines multivariate regression with selections from a small menu of soft-rationalist mechanisms… when we are expected to accept the mechanism(s) simply because of widespread invocation in the field. See the overuse of audience-cost mechanisms in settings where the heroic assumptions required for them are simply not credible (get it?).
**The amount of emotional energy invested on all sides of these disputes is, to be frank, absolutely shocking and appalling.
***But, let’s face it, you are correct. Clique dynamics matter a great deal in getting a first job; and given the massively uneven distribution of resources among US colleges and universities, that first job may very well have long-term downstream effects. Of course, we tend to confuse a field in which scholars are frequently born on second base and then advanced to third by a walk with “strict meritocracy,” but that’s another matter. That being said, almost no one actually cares about your “proof” that we’ve gotten the coefficient on the interaction term between trade and democracy slightly off — even if it did land in a “top” journal because, well, see my point number four.