[I wrote the bulk of this post very late at night while suffering a bout of insomnia. In the end, I ran out of energy and called it quits. Thus, I’ve edited the post for clarity and style. Major content updates are in blue text (bad idea, now abandoned).]
[2nd Update: I called this post the poverty of IR Theory not the poverty of IR. There’s a difference.]
PM’s post on getting into political-science PhD programs continues to provoke spirited debate. Of particular note is reaction to his claims (echoing Dan Drezner) about the importance of mathematical and statistical skills. As “Evanr” writes:
It sounds like you think these people emerge ‘ex nihilo’ as scholars at the top of the field. At one point and time they were ‘new graduate’ students too, and were very much made the way they are by virtue of their training. I don’t think Wendt would have produced the scholarship he did without the non-mathematical influence of Raymond Duvall [nb: Bud started out his career doing statistical work; Alex was also trained by David Sylvan, whose work extends to agent-based modeling]. Is it not worthwhile to look at the training of top scholars to see how we should shape current students?
There may be a vast consensus – I’m not sure if there is – that specific forms of training are indispensable to graduate students, but this consensus may be wrong. It sounds like your recommendations are more about reproducing the orthodoxies of the field to make oneself a marketable candidate than they are intended to produce thoughtful, innovative scholarship. In the short term this may give you an edge in entering the field, but it may also make for lackluster career advancement. With the exception of certain ‘citation cartels’, thinking like everyone else is not a great way to get published.
Having spent far too many years on my Department’s admissions committee–which I currently chair–I have to agree with part of PM’s response: it is simply now a fact of life that prior mathematical and statistical training improves one’s chances of getting into most of the first- and second-tier IR programs in the United States. But that, as PM also notes, begs the “should it be this way?” question.
My sense is that over-professionalization of graduate students is an enormous threat to the vibrancy and innovativeness of International Relations (IR). I am far from alone in this assessment. But I think the structural pressures for over-professionalization are awfully powerful; in conjunction with the triumph of behavioralism (or what PTJ reconstructs as neo-positivism), this means that “theory testing” via large-n regression analysis will only grow in dominance over time. I’d also caution some of my smug European and Canadian friends that the writing is on the wall for them as well… albeit currently in very small print.
I should be very clear about the argument I develop below. I am not claiming that neopositivist work is “bad” or making substantive claims about the merits of statistical work. I do believe that general-linear-reality (GLR) approaches — both qualitative and quantitative — are overused at the expense of non-GLR frameworks–again, both qualitative and quantitative. I am also concerned with the general devaluation of singular-causal analysis.
Indeed, one of my “problems” in IR is that I am probably too catholic for my own good, and thus don’t have a home in any particular camp. My views are heavily inflected by my time with high-school and college debate, which led me to a quasi-perspectivist view of theoretical explanation: different kinds of work involve different wagers about what “counts” as instruments, knowledge, and results. Different kinds of work can and should be engaged in debate about these wagers, but that doesn’t mean we shouldn’t also evaluate work on its own terms. Thus, I get excited about a wide — probably too wide — variety of scholarship.*
What I am claiming is this: that the conjunction of over-professionalization, GLR-style statistical work, and environmental factors is diminishing the overall quality of theorization, circumscribing the audience for good theoretical work, and otherwise working in the direction of impoverishing IR theory. As is typical of me, I advance this claim in a way designed to be provocative.
1. Darwinian Pressure
We currently produce more PhDs than there are available jobs in IR. We produce far more PhDs than exist slots at the better-paying, research-oriented, well-located universities and liberal arts colleges in the United States. Given this fact of life, consider the following two strategies:
- Challenge your professors; adopt non-standard research designs; generally make trouble.
- Focus on finding out the template for getting the best job; affirm your professors; adopt safe research designs.
The first strategy can work out; it sometimes works quite well. But it more often fails spectacularly. Given some of the trends in the field (reinforced themselves by over-professionalization — we face a series of feedback loops here), the first strategy is much riskier than it was fifteen years ago. PhD students may be a variety of dysfunctional things, but stupid generally isn’t one of them. It isn’t much of a surprise, then, that an ever-growing number of them choose the second pathway.
2. Large-N Behavioralism Triumphant
Consider, for a moment, the number of critical, post-structuralist, feminist, or even mainstream-constructivist scholars who hold tenure at A-list and near A-list IR programs in the United States. How many of these programs have more than one tenured professor doing these kinds of work? Still thinking, I bet.
How many of them have multiple tenure-track professors working in this idiom? I can think of a few, including Cornell, George Washington, Ohio State, and Minnesota. But that’s not a lot.
How many have multiple tenure-track professors doing quantitative work, particularly in open-economy IPE (PDF) and JCR-style international security? There’s no point in enumerating them, as virtually every program fits this description.
How many exclusively qualitative scholars–critical, neopositivist, or whatever–have gotten jobs at A-list and near A-list schools in the last five years? Not many.
Now recall my stipulation that most PhD students in IR aren’t stupid; most figure out pretty quickly that failing to develop strong quant-fu (1) precludes one from getting a significant number of jobs, while acquiring it (2) closes the door on very few job opportunities. After all, very few members of search committees will say “well, that applicant’s dissertation involves a multivariate regression, I think multivariate regressions aren’t proper social science, so I’m going to block him.” But, I’m sad to say, many members of search committees will refuse to seriously entertain hiring someone who doesn’t use lots of numbers–unless some sort of logroll is underway.
Now add the fact of exponentially increasing computing power. Combine that with (1) nifty statistics packages that do a lot of the work for you; (2) data sets that, although often junk, are widely accepted as “what everyone uses”; and (3) the “free pass” we too often give to using inappropriate-but-currently-sexy statistical techniques. What we’ve got is a recipe for monoculture and for the wrong kind of innovation in statistical methods, i.e., innovation driven by latest-greatest fever rather than thinking through how particular approaches might either shed new, and important, light on old problems or open up new problem areas.
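[Purely for illustration, and not a dig at any particular study: a minimal sketch of how little effort the “template” GLR workflow now demands once the packages do the work. All variable names and data below are made up.]

```python
# A toy, simulated example of the off-the-shelf GLR workflow: dump covariates
# into an additive linear model and let the package produce the table.
# Nothing here corresponds to any real dataset or published result.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "trade": rng.normal(size=n),
    "democracy": rng.integers(0, 2, size=n),
    "gdp": rng.normal(size=n),
})
# Simulated outcome so the example runs end to end.
df["conflict"] = 0.3 * df["trade"] - 0.5 * df["democracy"] + rng.normal(size=n)

# One line of "quant-fu": an additive, general-linear-reality specification.
model = smf.ols("conflict ~ trade + democracy + gdp", data=df).fit()
print(model.summary())
```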
That’s not to say that you can’t “pick wrong” on the quantification front. Some people think statistical inference via sampling techniques is worthless and that only experiments tell us anything interesting. Others think experiments never say anything worthwhile about ongoing political processes. And game-theorists, who do use math, just aren’t getting the kind of traction that proponents of the approach thought they would in the 1990s.
I’m not going to bash large-N or other quantitative studies. Like many Ducks, I don’t find the distinction between quantitative and qualitative research particularly helpful. But I will claim that the triumph of general linear reality (GLR) models in the form of multivariate regression has reinforced small-c conservative tendencies within the field in a variety of ways.
Many quantitative GLR acolytes are convinced — or, at least, publicly express conviction — that they are on the correct side of the demarcation problem, i.e., that. they. are. doing. S-c-i-e-n-c-e. Normal science. Not those stupid “paradigm wars” that wasted our time in the 1980s and 1990s, and certainly not journalism, political theory, non-falsifiable parable telling, or any of that other stuff that is most. definitely. not. Science. And is therefore not simply a waste of our time, but also a shot fired directly at the heart of progress. As in: trying-to-drag-us-back-to-the-dark-ages evil.
My rhetoric may be over the top, but I am not joking. Many perfectly nice, very interesting, extremely smart, and otherwise generous people really do believe that, in blocking the advancement of “alternative” approaches, they are fighting the good fight.** In this paradigm, innovation takes the form of technical improvements; competent work on topics that some percentage of peer reviewers believe to be interesting should be published; and, to be frank, a certain scholasticism winds up prevailing.
Would this be different if another “movement” currently enjoyed an advantage? Probably not. But I do think there’s something — as PTJ has written about — at work among those self-consciously committed to “Science” (and before this was about quantitative methods it was about quasi-statistical qualitative work, which should put to rest the notion that we’re talking about numbers) that makes monoculture more likely. I’d feel less worried about this if I saw more persuasive evidence of cumulative-knowledge building in the field — rather than “truths” that are established and upheld exclusively by sociological processes — and if scholars doing even non-standard GLR work had an easier time of it.
3. The De-intellectualization of Graduate School
So what happens when students:
- Enter graduate school with most of the methods training they will need;
- Have strong incentives to adopt the “template” strategy for getting a job;
- Confront a publishing and hiring environment in which methodological deviance is a liability;
- Receive instruction from at least some instructors who are convinced that there’s a “right way” and a “wrong way” to do social science; and
- Train in Departments under intense pressure from Graduate School administrators to reduce the time-to-completion of the PhD?
Answer: an increasing risk that the IR PhD becomes a period of intellectual closure; a perfectly rational aversion to debates that require questioning basic assumptions. In short, a recipe for impoverished theorization.
4. Damnit, Don’t We Know Better?
The good news — as many angry graduate students who post on Political Science Job Rumors fail to understand – is that most of the “better jobs” escape scholars who, no matter how many publications they have, aren’t producing solid middle-range theory. If your work consists of minor tweaks to existing democratic-peace models or throwing variables into a blender and reporting results, then, well, don’t assume that there’s some sort of conspiracy at work when an apparently under-published ABD gets the position that you think you deserve.*** The bad news is that you are increasingly likely to get hired at a significant subset of institutions ahead of creative scholars who don’t deploy multivariate regression… even if doing so would have been wildly inappropriate given available data and/or the nature of their puzzle.
A number of dynamics are at work here, but the most distressing involves key dimensions of organized hypocrisy in the field. In particular:
- We all know that peer reviewing is stochastic–governed by, for example, a surfeit of mediocre reviewers, their transient mental states (‘may your reviewers never read your manuscript right before lunch’), and overwhelmed editors. But we still treat the successful navigation of the slings and arrows of a few prestigious journals as the leading indicator of scholarly quality. Because, after all, why use your own brain when you can farm out your judgment to two or three anonymous reviewers?
- We all know that quality is not the same as quantity, yet we still wind up counting the number of journal articles as an indicator of past and future scholarly merit.
- We all know that it is nearly impossible to both make an innovative argument and provide empirical support for it within the confines of a single article, yet we continuously shrink the length of journal articles, demand that the latter accompany the former, and discount “pure theory” articles — thus making it even more difficult to publish innovative arguments.
- We all know that the peer-review process is already biased against controversial claims, yet more and more journals default to single-reviewer veto–a decision that makes it even harder to publish innovative work, let alone innovative theory.
These dynamics do, of course, sometimes let innovative arguments through. But they too often distort them into conformist shadows of their former selves. Note again that these tendencies reinforce orthodoxy — whatever that orthodoxy is at the moment.
5. Conclusion
I’ve completely lost track of where I began, what the point was, and where I intended to go. But this is a blog, and I have tenure, so I can yell at the kids to get off my lawn… and otherwise rant the rant of the aging curmudgeon. And, just in case you aren’t clear about this: I am overstating the case in order to push discussion along. Get that?
And, if you didn’t get the moral of this story: I question the judgment of anyone who gets a PhD without developing statistical skills and being able to provide some evidence to committees that he or she has those skills. It. Just. Isn’t. Worth. It.
Does that make me part of the problem? Maybe. But I think one can hardly look at my record and come to that conclusion.
Manual trackbacks: James Joyner, Steve Saideman, Erik Voeten.
———
*I do have a pet peeve, however: scholarship that combines multivariate regression with selections from a small menu of soft-rationalist mechanisms… when we are expected to accept the mechanism(s) simply because of widespread invocation in the field. See the overuse of audience-cost mechanisms in settings where the heroic assumptions required for them are simply not credible (get it?).
**The amount of emotional energy invested on all sides of these disputes is, to be frank, absolutely shocking and appalling.
***But, let’s face it, you are correct. Clique dynamics matter a great deal in getting a first job; and given the massively uneven distribution of resources among US colleges and universities, that first job may very well have long-term downstream effects. Of course, we tend to confuse a field in which scholars are frequently born on second base and then advanced to third by a walk with “strict meritocracy,” but that’s another matter. That being said, almost no one actually cares about your “proof” that we’ve gotten the coefficient on the interaction term between trade and democracy slightly off — even if it did land in a “top” journal because, well, see my point number four.
I fall under the “generally make trouble” category, and I’m fine with it. I like to think outside of the box, which is what I thought graduate school was partially about. If that means I never get tenure at some prestigious university (or even a smaller research school), then so be it. If I never get to work in academia, oh well. I just see no reason for me to prepare myself for scholarship I find dull. I can’t imagine many things worse in the professional world than spending 30 years, working as hard as professors do, just to produce research I don’t care about.
Good post by the way.
I’m not sure we can assume that everyone who goes on the job market today is going to end up in the US. I know nothing about the hiring patterns of foreign universities — but are all those new programs in the Middle East, Singapore, etc. ALSO discriminating in favor of those who perform quantitative analysis? Is your strategy number two predominantly for people who want to end up in US institutions — or does it work across the board?
Middle range theorists are less worried. Or so I think anyway: https://t.co/pDQaQzd4
I want to respond to Mary and Hulley22, but this is a broader response. There seems to be a weird “solipsism of the dispossessed” at work here. Even as the biomass of IR converts from its 1980s-era equilibrium (lots of words! badly misunderstood history! throw-weights!) to its current equilibrium (regressions! sexy quant! ZOMG DID YOU HEAR ABOUT ENTROPY MATCHING!), the reaction from commenters such as these seems to be not a desire to confront the quantoids’ assumptions head-on but a retreat into the blissful land of “Europe” and “non-top tier schools.” Such refuges are constructed as idylls, where, presumably, everyone sits around reading Foucault, gently laughing at the foibles of Stata users, and publishing in inter-multi-trans-disciplinary journals.
That is, of course, not true.
There’s a reason why top-tier jobs are desirable and why people who fear disciplinary marginalization should worry that they are being transformed into an intellectual monoculture. Professors in those billets wield influence far beyond their numbers, since they are the 1% in terms of resources, which yields high productivity and the ability to form networks that perpetuate their “tree”–their network of grad students. Running away is not a solution or an admission of defeat: it’s a forfeiture.
You can’t pretend that yours is the truer, deeper, worthier path, or that you will toil in blissful innocence of the temptations of careerism. One of the major points of adulthood is realizing that nothing is ever so innocent–and another is realizing that bills have to be paid, so tradeoffs have to be made.
That means that spending 5 or 7 or however many years in graduate school without intending to use it in a profession–academia only one among many–is an exercise not only in futility but in the active denial of reality. Academia, as PTJ and Max Weber have said, is a vocation. But it is *not* an avocation.
And just as that other great traditional vocation, ascending to the ranks of the clergy, means not that you spend your time thinking about theology and theodicy and all the other nice problems of the seminary but instead working on the real issues of your parish (including soliciting the donations to keep the doors open), so does being a part of a grand intellectual discipline mean engaging with the standards and the real practice of your colleagues.
Dan,
I’m pleased to see you saw my post as worth responding to, and I take your comments in kind. Let me reiterate, though, that I am not dismissive of mathematics and related forms of training in IR; I merely question the value of organizing the vast majority of undergraduate and graduate training around it.
The interesting thing about your post, however, is that you never responded to my second point against PM. If mathematical training is so integral to professional success in IR, why do we see comparatively little of it in IR’s top-ranked scholars? Look at the top 10 ranked scholars in America, where your argument should hold the strongest, and tell me how many of them have mathematics as integral to their research program (https://irtheoryandpractice.wm.edu/projects/trip/TRIP%202011%20RESULTS%20US%20RESPONDENTS.pdf). Now tell me how many of these people have tenure?
The problem I see with the discussion by PM and yourself is that it’s – somewhat ironically – disconnected from evidence of what is taken as the best scholarship in the field. At best it is based on anecdotal accounts of what counts as the ‘consensus’ over graduate training and what hiring committees will accept. I recognize that, on one level, these are two separate things: early marketability and potential long-term career success – and I have a feeling you are absolutely right about the former. However, they also blend together in that a more radical (and therefore risky) research program may have major payoffs in the future, including tenure. In this respect it may be worthwhile asking some of the individuals who started on the institutional ‘outside’ and now occupy the institutional ‘inside’: would they have told their past selves that they would have been better off sticking to the orthodoxies of the field?
Although I recognize that as a lowly graduate student I stand in the deep, deep shadow of your superior grasp of the field, I feel pretty confident in contesting the proposition that large-N Behaviouralism will become triumphant in the United States and beyond. Looking at the TRIP 2006 and 2012 surveys, those in the United States who reported primarily using quantitative methods only increased from 22% to 23%. An increase, mind you, but an increase of only one percent. You may be looking at a triumph in the very, very long run. What’s more, outside the United States the impact appears even lower. Just 6% of Canadian scholars in 2006 reported using quantitative analysis as their primary research methodology (recent results not yet available). In short, far more people in IR use non-quantitative methods, and I haven’t seen any evidence of any major shift in the discipline. I guess I am asking for some statistics on your point about the importance of statistics; although I accept that, with my meager statistical training, I may have misinterpreted the numbers.
Lastly, I don’t think that your rhetoric is over the top in suggesting there are some (key word: some) who exist within an intellectual mono-culture and who hold a clearly demarcated view that what they are doing is science and everything else is bunk. The problem is that these people can be wrong, and wrong in ways that are profoundly unscientific. The persistent view that political actors are rational entities for whom emotions are a temporary abnormality is a case in point – this view of emotion is profoundly contested by research in neuroscience (Mercer, 2006; 2010). Yet these profoundly unscientific viewpoints become the bread and butter of the field and go on to dominate its orthodoxy. The risk, then, of an inward-looking ‘scientific’ project is not simply an intellectual mono-culture, as you put it, but a mono-culture that is simply wrong.
Cheers,
P.S. I accept your comments about Wendt’s training, but let’s be clear: he specifically thanks Duvall in the context of the ‘Minnesota School’ in the acknowledgements section of STIP, the work that won best book of the decade from ISA. In his Theory Talks interview he says: “I think the person who most influenced me was my graduate school advisor, Raymond Duvall at Minnesota, who introduced me to Marxism and post-structuralism in IR in the 80s.” (https://www.theory-talks.org/2008/04/theory-talk-3.html). If you want to argue that mathematical training had any real significant influence on Wendt, you need to make a more substantive case.
By your standards, then, we should abandon this mammalian lark and embrace Jurassic phenotypes, because the dinosaurs were also very successful a long time ago.
I think we may need to question any inferences to be drawn from that TRIP survey. The scholar who produced “the most interesting work in the last 5 years” was Wendt… in the 2000s. So the modal response was a person whose “last five years” oeuvre was this:
- 2008: “Sovereignty and the UFO” (with Raymond Duvall), Political Theory, 36(4), 607-633.
- 2006: “Social Theory as Cartesian Science: An Auto-Critique from a Quantum Perspective,” in Stefano Guzzini and Anna Leander, eds., Constructivism and International Relations: Alexander Wendt and his Critics, Routledge, pp. 181-219.
- 2005: “Agency, Teleology, and the World State: A Reply to Shannon,” European Journal of International Relations, 11(4), 589-598.
- 2005: “How Not to Argue Against State Personhood,” Review of International Studies, 31(2), 357-60.
- 2004: “The State as Person in International Theory,” Review of International Studies, 30(2), 289-316.
- 2003: “Why a World State is Inevitable,” European Journal of International Relations, 9(4), 491-542.
Seriously. I personally can’t think of anything less interesting than UFOs and quantum theory…
I appreciate snark as much as anyone, but Brad had a point. That’s a terrible question. The most interesting part of the response is that rather than grapple with what is actually a difficult question, we embrace a very loose, bordering on indefensible, definition of “five years.”
Maybe if one of the sucks is a closet survey expert, they’ll be inspired to offer a blog post enumerating all the potential problems in that short question.
err, that should read “one of the ducks.”
I would curse the person who gave us the qwerty keyboard, but being one letter off on an abcd keyboard might have been even worse…
Yeah, right. We know you were just demonstrating your appreciation for snark. :)
A little more seriously: I’m not sure I understand your understanding of Brad’s ‘point.’ He lists more than the most recent 5 years’ work of Wendt’s, but what’s the basis for imputing that understanding to any of the respondents who answered that question? In the last 5 years, Wendt has published, according to his vita, work on UFOs, quantum theory, and systems science, including two new articles and an edited book. Seems plenty interesting. I don’t recall how I answered this question on the 2011 survey – knowing me, mine was probably among the long tail of lesser-known individuals – but I find Wendt’s work incredibly ‘interesting’ and, within the confines of the responses allowed (as I recall there are very few open-ended questions on the TRIP survey, which is indeed a problem), I don’t see what’s so weird about that. Since you’re the one who apparently has something to say on this topic, why don’t you go ahead and say it.
Just killing time before a meeting. I don’t have the time. Basic criticisms point to: not all respondents are 100% on top of recent publications, never mind pipelines, working papers, dissertations, etc. The question of who’s been doing the most interesting work in the last 5 years is a question that only a small fraction of respondents can fairly answer. And it’s probably only of interest to a small fraction of respondents. I’d guess that the great majority were stretching the definition of five years – to get a bigger sample of work with which they’re more comfortable.
If the 5-year time frame question is a problem, then stick with the 20-year time-span questions (Q43 & 44). And the point was not to valorize Wendt’s work (I believe there are plenty of valid criticisms), but to illustrate that the top scholarship doesn’t necessarily involve mathematical techniques.
Yeah but the question wasn’t measuring which research is “interesting” in an objective sense. The question is measuring the opinions or judgments of the respondents. Surely the respondents are the most qualified to provide their own opinions? And surely no one takes the answer as anything other than the collective opinions of the respondents in the sample? Surely I don’t.
At the risk of inflicting increasingly narrow comment boxes on ((carefully presses d)) Duck readers . . . The problem is not that respondents define “interesting” for themselves – it’s that they define “five years” for themselves.
If you look at the various surveys, there is little difference between the 20yr and 5yr results. There is also precious little difference between 5yr results in different surveys.
My understanding, based on discussions with Mike, is that the goal is something like getting the academic community’s pulse on “the most interesting new work” in IR.
If that’s the case, then they should just phrase the question that way and be up front about differing interpretations of “new,” rather than making the question overly specific.
For the reasons I mentioned earlier, I doubt that most respondents are answering the question as it was intended. What respondents seem to be answering is “over the past five years, which scholars’ work has interested you?” That would explain the near complete lack of variation in responses over different surveys, and between 5 and 20 year intervals within surveys.
And, to go all meta and come full-circle, it might also underscore Dan’s original point about the paucity of innovation in IR.
This makes no sense at all. How can you or Brad possibly infer from the fact that a survey respondent said they thought Wendt’s last 5 years of work were interesting that what they really meant was the last 20 years of work?
P.S. Disqus ducks. Oops, I meant, sucks.
Started to make one last response, but I can’t put up with this 1/4-inch box. I’ll respond above, instead.
Responding to myself, because the comment box was just obnoxious if I continued the trend…
I’ll say it. I don’t believe the majority of TRIP respondents are familiar enough with work produced in the last 5 years to offer a meaningful response.
In response, they list more or less the same people that they have considered interesting for the past 20 years.
I really didn’t want to get involved in naming names. I thought I could get away with leaving that to Brad. It seems I can’t. My case in point wouldn’t be Wendt but, rather, Nye. Soft power was old news by 2008. The leadership book hadn’t yet been published, and the rethinking soft power book was still being written, if it had been begun at all. Nevertheless, Nye’s work continued to be influential, and it doesn’t surprise me that he was commonly listed as having produced the most interesting work in the field from 2003-2007.
Because the question people answered was actually “which scholars’ work did I find interesting in the last 5 years.”
Which simply isn’t going to vary much from “which scholars have been most interesting over the last 20.”
I just think it’s a wasted question. I have a feeling we’ll agree to disagree.
After one year, I transferred from one top-25 PhD program that was heavily, almost exclusively quantitative to another top-25 department with a then-healthy mix of quant/qual. Ten years on, that second program is now dominated by quants, having been colonized by them.
Some people vote as blocs–there is an abundance of scholarship supporting this. Many quants are much more interested in these sorts of issues, so, to no one’s surprise, many also tend to let their work inform a lot of their substantive political behavior, apparently forgetting (or not knowing) Hume’s argument against allowing method to determine one’s normative position. My experience, albeit small-N, with quants tends to support this claim. But Dan’s larger point concerning hiring tendencies is also supported if one just pays attention to hiring patterns over the last couple of years.
Do quallies do the same thing? Sure, ask anyone at the New School the last time they hired a formal theorist. Maybe quants have historically been more inclined to vote as blocs or be hostile towards work that does not look like theirs; I can’t demonstrate that with much certainty. But the confidence with which many large-N people dismiss qual work is really over the top. Often referring to the legitimating authority of “scientific” as opposed to “unscientific” claims, they dismiss qual work (and hires/students) out of hand. It is this confidence (arrogance?) that I find most troubling, as it is rather “unscientific” if one really gets down to it.
In addition to the New School, look at other qual/critical-oriented depts like Johns Hopkins or Hawaii and ask when was the last time they hired a non-critically-oriented and/or non-quant scholar as TT faculty.
The point is that scholars in many (but not all) R1 depts prefer to hire someone whose epistemological, theoretical, and methodological preferences match those that predominate within that specific dept. It’s not what I (as well as others) would have liked, but unfortunately that’s just the way hiring (and sometimes even grad student admission) is conducted in so many PS/IR depts :-(.
Your comment deserves a much longer response. I will say that I guess I’ve succeeded, insofar as both statistics-types and alternative-methods types think this post is an attack on them, when it is basically a lament about template-oriented professionalization and some crude hypotheses about how we got here… as well as an argument in favor of getting math skills.
Anyway, you may be right that my orientation is toward the “elite” schools and that this profoundly biases my analysis.
Also, I wasn’t suggesting that Bud is anything but what he is…. just thought it worth mentioning what he started out doing. Ted Hopf also started out doing large-n behavioral work.
There may be a different way to frame the issue. The starting point for many of these debates about going to graduate school and being successful there and beyond is thinking about the advice one might want to give to a senior or a young professional who asks about whether to enter a PhD program. If the student has a burning interest in certain kinds of questions about society, if she is going to spend time working on a question or a group of questions no matter what else she might do, then thinking about entering a PhD program isn’t a completely crazy thing to do. But, even if the questions that energize the student or relatively recent graduate seem to be all about international relations as studied in political science, that might not be the best thing to suggest to her. There are other ways to study most of the things that are studied by US IR today, in sociology, development studies, economics, women’s studies, history, philosophy (lots of that global ethics stuff these days), and certainly a half dozen other fields. In some of them the academic (and other) job prospects are better than in political science, in some they are worse. In some fields the current methodological strictures are narrower than in US IR; in some they are broader. The best advice may be to find the best match between the questions you are passionate about and the current concerns of the PhD research field. One thing that I think we do know is that fields, as well as scholars’ interests, can change quite a bit from one decade to the next. I’m pretty sure that if behavioral economics and the kind of experimental work that Esther Duflo is doing had existed when I was an undergraduate (a long time ago), I would have done everything I could to get into a really good economics program. Instead, some good advice from my economics advisors led me to what was then a very open and exciting field, IR.
Sorry Dan, I’m going to take issue with your…well it’s not a claim really, not sure what it is. But….:
“Because, after all, why use your own brain when you can farm out your judgment to two or three anonymous reviewers?”
I can’t speak for all journals, but this is a distortion of the process (not that I’m going to defend every aspect of the double-blind system). But what would you put in its place? Farm out your judgement? This sounds very much like you have enough confidence in your own judgement that you’d like ‘authority’ to decide what gets published or not. Maybe the Dan Nexon Review of International Studies? Do we really want editors only publishing THE pieces they like? Don’t answer that; we already know it happens in some journals. This really reduces the commitment (generally free) that reviewers make to doing reviews; and tbh, my experience is that yes, you get the arse-holes, but most people are actually very professional.
I can tell you, indeed I have, that if we get a review back that we think is unfair, inadequate, and so on, we address it. I can also tell you that if I were making all the decisions myself (rather than basing them on the referees’ judgements, and an assessment of them) then the EJIR would look very different; but you know, it’s not my journal, and being an editor is both a privilege and a responsibility, and the journal is not my personal plaything.
Beyond that, and not being in the US, I don’t know how bad it is, but come to our panels at ISA to see what’s happening to theory. And incidentally, I originally wanted to call it the end of Theory, not the ‘End of IR theory’: hope you get the difference, because I suspect on this we might agree.
Fixed the quotation for you.
I agree that I’m engaging in a bit of hyperbole here. And you’re right that there’s no costless fix to the problems built into the process. You and I have discussed this issue on a few occasions, in no small part because you’re doing an excellent job running a journal and because I aspire to edit a journal in the not-so-distant future.
But even the best peer-review processes are fraught (I’ve provided links to studies of this in past posts), and I don’t think we internalize just how fraught they are into the way we do business in the field. I do get a sense — from multiple discussions over the years with people doing every kind of imaginable work — that there’s widespread recognition of how much chance goes into the process, yet that recognition doesn’t bleed over into a lot of our scholarly practices.
Some of this is well taken. However, (as much as I hate the analogy) graduate school is learning how to paint by numbers. Most graduate students do not have big, controversial, discipline-shaping ideas. Some do, many think they do, but most really don’t. Most people’s first book out of their diss is not their magnum opus. Once one has more gray hairs, more perspective on the discipline, and a deeper perspective on politics, the big ideas can come forth (I’m still far from that point in my career).
Graduate school is about learning how to conduct research, and producing a dissertation that demonstrates some competence with theory and methods. Yes there are certain orthodoxies, but there is still a rather large menu of techniques to choose from. Once a student can show they have mastered a basic set of skills–that are widely seen to be ‘industry standards’–they get their PhD. After tenure (+ N years), the real creativity can start–if one has some novel perspective.
This is why an artist like Picasso is amazing. It’s not that he CAN’T paint a proper guitar or vase. He knows the “rules” of painting. He chose to break convention, but only after he mastered some basic skills. Getting a PhD is about mastering certain basics; lots of “non-standard” stuff is sometimes good but mostly crap. Get the essentials down, then improvise.
Agreed, but what are the “essentials”? Part of the point here is that we should build creativity into the grad process. Yes, demand technical competence, but also foster rigorous creativity. Otherwise, in 20 years, the field is stuck with a lot of technicians and few expansive intellects.
And Picasso? Never finished his formal training. No doubt this helped his creativity and competence.
As someone who came into IR via interests in theorizing (call it the old paradigm wars), and then exited it because I realized too little theory is used, I have mixed thoughts (Oprah has feelings) about your post.
I fully agree that a peculiar type of large-n analysis has taken over some sections of the field. Anecdotally, I can point to the insurgency and civil war literature, but also conflict studies in general. This has led some graduate students to leave IR for Comparative because they were uninterested in large-n analyses and refused to drink the Kool-Aid on the “do this or you won’t get a job” mantra. This has also led to the closure of certain journals to IR scholars who do qualitative work. Both these points are highlighted in the post.
However, the qualitative IR scholars are not blameless. Some senior IR scholars (more Foreign Policy than IR) do not understand any theory construction. Here, I do not mean rational choice or statistical models, I mean simply qualitative research design and methods. They publish articles with anecdotal evidence, elite interviews, and refuse to share data they allegedly have.
Also, quite a few IR scholars are involved in consulting, and the reader has to simply trust that their consulting does not affect findings that may not be replicated. In Economics, declaring consulting responsibilities is becoming the norm, but IR scholars who make life and death decisions do not need to.
Thus, large-n analysis becomes more attractive given the difficulty of replication, the lack of rigorous theory testing/building, and no declaration of possible alternative loyalties.
One person’s “over-professionalization” is another’s under-professionalization. I suppose I want a world where I can wonder at the stars but professional astronomers are expected to be able to do things like model space and measure things. There is nothing stopping people from having big ideas. Certainly, if one depended on graduate school for this, one would be a late bloomer, at the very least. Perhaps being a professional begins with deep thoughts rather than ending with them.
There is a revisionist sort of history being practiced here (and also by Perestroikans), at the same time as selective nostalgia. When the subject of orthodoxy is brought up, it is as if free intellectual love was being practiced until someone opened Pandora’s statistical package. Innovation happens at the periphery; quantitative and formal methods were certainly not orthodoxy two generations ago, if indeed they are today. Political science was hostile to these methods, especially in IR. It was not that long ago that a reviewer could openly condemn a paper for using “numbers.” If today there is more reticence about these criticisms, then we are to the point where more artful justifications must be used.
I am also struck by the selective treatment of privilege. Is a graduate of Columbia University not the beneficiary of all sorts of disciplinary largesse? Dan and I overlapped at CU for a few years. He is a very smart and talented scholar. I am sure he would reject the notion that I had my appointment because I ran regressions while his placement at G-town was pure merit. Yet, that is the tenor of the argument. We all want to believe our work is important and that reviewers are the ones deficient when they reject our work. Sometimes this is the truth, but it is a corrosive position to take that the review process is flawed, especially when such a position is clearly liable to be self-serving. Controversy is harder to publish because it is controversial. It is also (in my humble experience) better cited once in print. That is a tradeoff I can live with.
Erik Gartzke
Egartzke: I think it is interesting that you refer to astronomy, but the metaphor raises some further questions. 1) Do professional astronomers measure and model the universe in the absence of theories (i.e. “wondering at the stars” or “big ideas”)? I’m sure you’d agree the ideal relationship should be heuristic, but maybe you believe that empirical analysis can somehow precede (even an implicit) theory. 2) Have professional astronomers thought through all of their big ideas before even getting to graduate school? I really doubt it. Perhaps being a professional means taking time to engage in theorizing throughout one’s career, or accepting a division of labor.
Erik, thanks for some very strong counterarguments. A few points:
1. The endnote dealing with the effects of privilege on the market is independent of methods and methodological issues. Among regular PSJR posters one finds the refrain that the “top schools” (whose composition is much debated) place undeserving ABDs in the best jobs. Oddly enough, this argument is (or, at least, was) often conjoined to the claim that the “top schools” were biased toward paradigms rather than proper empirical analysis. In the main text I was pushing back against this claim: most of the “hot” IR job prospects out of top schools (a) do large-N work, (b) are not primarily doing “paradigm” work, but (c) make interesting IR theory arguments. But in the endnote I agree that “having the right pedigree” confers all kinds of advantages. I would include *myself* as someone who has benefited from attending rather highly-ranked undergraduate and graduate institutions. I doubt that my undergraduate grades would have enabled me to get a PhD had I not attended Harvard, and I’m pretty sure that attending Columbia gave me a huge advantage in getting a very desirable non-top-ten position.
2. I wonder if you’re reading, as Erik V. did, my definition of over-professionalization as “lots of quantitative skills” rather than “too much pressure to spend all of grad school making yourself a marketable commodity.”
3. I do not mean to suggest that things were great in the old days. I don’t want us to go back to a time when math illiteracy was a justification for rejecting articles, but I also think that we are in a time when it is too difficult to get non-neopositivist work (qualitative or quantitative) published in the journals that make or break careers in the United States.
4. My views on peer review are not primarily shaped by my own experiences as the subject of review, but by (a) reading reviews that come back on manuscripts that I also reviewed, (b) spending a fair amount of time reading studies of the peer review process carried out in cognate and natural-science disciplines, and (c) a decade of collecting “war stories” [both good and bad] from the peer review process. I’ve posted on these topics before, but I’ve become increasingly convinced that Arthur Stinchcombe’s decades-old argument was prescient: as publication in peer-reviewed journals becomes more important for success in a discipline, the average quality of reviewing will decrease. IIRC, this was based on very simple mathematical analysis. I don’t think there’s anything corrosive about pointing this out, and I’m a bit sick of hearing people dismiss legitimate concerns about the process on the grounds that the complainer is just bitter about having an article rejected. That being said, I am not arguing for its abandonment. As I mentioned to Colin, every remedy for a particular problem brings with it a new problem. But I think we need to be very careful about the amount of weight we place on landing a top-tier journal article, particularly for scholars at a stage in which they’ve submitted maybe 2-3 articles in their lives.
Minor quibble: the problem is not, and never has been, “statistics.” The problem is neopositivism, which a good majority of “qualitative” types endorse, not quite realizing that endorsing a methodological strategy focused on the testing of hypotheses about cross-case covariation leads, almost inevitably, to larger n quantification. “Small-n” is not a methodology, but more of a lifestyle choice: “I want to be a Scientist but I am not good at math, so I’ll compare across three cases instead of across 30000.”
More important quibble: there are many decent jobs out there in the US for people who don’t do large-n neopositivist work if you like to teach and don’t care about living in a major urban area. Of course, they’re not at top-50 institutions, don’t come with 0-1 or 1-1 or even 2-1 courseloads and massive amounts of research money, and have none of the other prestige perks that elite schools offer. Plus they don’t pay the kind of salaries that elite schools pay. But it is simply not the case that one needs to get advanced statistical training to get a job in IR in the US, and I have a number of my own PhDs floating around now with tenure-line jobs — and a couple with tenure! — to prove it.
That said, depressing forecast: it’s going to get harder to survive in US IR without doing neopositivist work, because the top journals are not publishing a lot of stuff that isn’t variable-based hypothesis-testing of one sort or another, and the poverty of US PoliSci grad school training in Big Thinkers and Big Questions (even when Dan and I were at Columbia in the mid-1990s, the version of IR that we were taught kind of went “in the beginning there was Waltz, and then there was a lot of neoliberal institutionalism, which constitutes progress,” unless you got the variant which went “in the beginning there was classical realism, and then there was a lightly systematized form of classical realism, which constitutes progress” — it depended on just who was teaching the basic IR course) means that we are increasingly breeding a scholarly generation that knows a lot about technique but not a lot about substance. Eventually those people will control hiring even at teaching-intensive institutions, because there won’t be anyone left.
Solution: split IR off from US PoliSci. PoliSci is a lost cause, APSA is boring, the APSR isn’t worth the price of the paper it’s printed on and most of the other top PoliSci journals are unreadable unless you enjoy wading through equations, and the entire accumulated knowledge and wisdom of the entire project of US PoliSci is basically zero, unless you care a whole lot about the dynamics of legislatures and elections in established democracies. (And you have to care a whole lot, because the innovative stuff is really, really fine-grained; the rest is Political Campaigning 101 and Legislative Procedure 101.) Very occasionally stuff shows up in US PoliSci that bleeds through from other, more interesting and innovative areas of inquiry, but the time delay is like listening to the radio in upstate New York: anything that’s a current hit up there is old news in the city.
There are interesting theories about politics, and about international politics, and about world politics. Few if any of them are native to US PoliSci. Fortunately for IR as a scholarly enterprise, it’s a lot bigger than US PoliSci.
“That said, depressing forecast: it’s going to get harder to survive in US IR without doing neopositivist work, because the top journals are not publishing a lot of stuff that isn’t variable-based hypothesis-testing”
The allure of neopositivism does run deep.
“the APSR isn’t worth the price of the paper it’s printed on…”
I don’t spend a lot of time with APSR either. However, some of the pol. theory articles are worth a glance and occasionally APSR runs an interesting IR article. For example, I’ve mentioned this piece before on my blog and I’ll be mentioning it again.
sorry for the double post. not sure how that happened.
Re: “Guest” – I can agree with you that the question may be wasted. I just don’t know how you’d ask the same question in any other way. The question isn’t who “is” the most interesting, but who people “think” are the most interesting. Whether there is a strong correlation between the way they’d answer if primed to think in 5-year increments v. 20-year increments is something you can only determine by asking the question. Based on personal experience, I am not so convinced, but you may be right.
What I’d prefer is to be able to look at the long tail of answers. Whether it’s Wendt or Nye or someone else, it’s no surprise that the person with the most “votes” here is one of the same individuals who is considered most influential. But that’s also the least interesting part of the answer. I want to know who besides those individuals also received some mention. My beef with the TRIP survey is not that they ask questions like this but that they make it difficult for people to provide (or access data on) nuanced answers.
I consulted with MT and the W&M folks a bit about these questions, based on that conference at ISA that you were also at last year. They/we tried to phrase that five year question with the idea of trying to get a sense of who people thought were doing the most interesting work NOW. The idea behind separating it from the 20 year question was so that people would get credit for their older work in that 20 year timeframe, but the people who got credit in the 5 year timeframe would be based solely on that work. Now that’s impossible, obviously, since how most people read the present work depends on our impression of the past work in many cases, but that was the goal.
My sense, and I have not seen the specific data (and I want to see the tails as well), is that the TRIP people will try to modify the question next time to test and see whether or not the large correlation between the 5 year increments and the 20 year increments is because people genuinely think, for example, that Wendt’s work in the last 5 years (exclusively) really was that good, or whether people are being lazy. So hopefully there will be evidence to resolve this the next time around. . . . .
The purpose of studying quantitative methods (statistics and formal models) is to learn to conduct logical analysis and derive coherent conclusions from given premises. This is exactly what this blog post lacks (see EGartzke’s reaction for some of the flaws). I think you could hardly provide a better case for the need for further professionalization.
The fact that this comment (quite nonsensically) places statistics and formal models in a general category called “quantitative methods” provides a great case for a) the need for more serious education in methodological questions in the field in general, and b) the problems inherent in a facile use of terms like “logical analysis” and “coherent conclusions” as synonyms for “stuff that makes sense to me.” Or, more bluntly: come on, Anon1, make a counter-argument instead of resorting to derisive name-calling.
You don’t need quant methods to do logical analysis and derive conclusions. What do you think philosophers have been doing for centuries? Try this old chestnut.
Premises:
1. All men [sic] are mortal.
2. Socrates is a man.
Conclusion:
3. Socrates is mortal.
It’s logical and the conclusion was derived from the premises; oh and no quantification was involved.
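[An illustrative aside, not part of the original comment: the same syllogism rendered in Lean 4. All identifiers are mine; the point is only that the derivation runs from premises to a conclusion without any numbers.]

```lean
-- The Socrates syllogism above, formalized; names are purely illustrative.
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

theorem socrates_is_mortal
    (h1 : ∀ p : Person, Man p → Mortal p)  -- premise 1: all men are mortal
    (h2 : Man socrates)                    -- premise 2: Socrates is a man
    : Mortal socrates :=                   -- conclusion: Socrates is mortal
  h1 socrates h2
```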
Correct. The problem is that the phenomena most social scientists study nowadays are not so simple. For studying complex things like causes of war, effects of international law, etc., you need a squared mind. Some people may have it on their own. Good for them. Most will just benefit enormously from some quant/formal training. This was my point.
Said more simply, look at how many works in IR theory suffer from sample selection bias, omitted variable bias or endogeneity. This just shows the lack of proper training in econometrics.
Alternatively, think how many works develop hypotheses that are not coherent with their own theoretical frameworks. Formal modelling may dramatically help to address this problem.
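[A toy illustration of the omitted-variable-bias point raised above, with simulated data and invented names; it refers to no actual IR study.]

```python
# Simulate omitted-variable bias: a confounder z drives both x and y, so a
# regression of y on x alone returns an estimate well above the true effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + rng.normal(size=n)             # "treatment" partly caused by z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true effect of x on y is 1.0

naive = sm.OLS(y, sm.add_constant(x)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()
print("naive estimate of x's effect:", naive.params[1])   # biased well above 1.0
print("estimate controlling for z:", full.params[1])      # close to 1.0
```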
I’d just like to note that this started off for me as a curiosity about an apparent discrepancy between the recommended graduate training and the composition of the field.
But from reading the responses I now feel that I have a more pessimistic view of IR in the United States than ever before :(
Hope to see you all at ISA next week…
The problem, Eavanr, is that US IR is terribly provincial (see Tom Biersteker’s analysis of syllabi at top-20 IR programs in the US, which is in the fantastic Waever and Tickner edited volume on IR around the globe), but the IR field itself is global. That’s the discrepancy.
Much better to have graduate students write nonsensical and incoherent pieces (like this one published on Duck of Minerva: https://duckofminerva.blogspot.com/2011/06/does-menstruation-explain-gender-wage.html) and then witness their inability to reply to the comments of the readers, right?
This is becoming a parody.
I’d love to engage in a discussion with you on this. I somehow suspect we wouldn’t be that far apart. (BTW, absolutely love your book.)
So, where to start? Well, I think Daniel Verdier’s question to me when I was in my second year at Chicago was right on target: “Why are you in graduate school?” My answer at the time: “Well, I want to learn more about this theory and that theory and more about history to understand what’s going on.” Daniel’s answer: “You are wrong. You are here to develop your ideas of how the world works.” I did not understand it at the time, but nowadays I tell all my students. He was 100% correct in that this SHOULD be the fundamental goal of pursuing a Ph.D. I’ll leave it at that for the moment, mainly because there are so many other points I’d like to raise that I won’t be able to finish my ISA paper …
Thanks. Compliment returned :-).
Well, ISA is coming up…. and it sounds like you’re going….
One more thing. Can we please get past the deification of “theory”, pretty please with sugar on top? *clasping hands, plaintive look*
What’s wrong with careful and, let’s just for the moment make the extreme case, “theory-free” (I know that’s impossible) data analysis of a large-N data set that establishes/suggests new patterns of behavior? Isn’t that an excellent way to stimulate new theory? But this kind of work, while praised as “interesting and deserving of publication in a good journal,” really does not get published. We MUST have theory, even if it’s a BOGUS theory. God forbid we find something that makes us go: hmmmm. Now that’s interesting, I would not have expected that, what could be going on here?
So, this post originated as a defense of PM’s argument about how to prepare for grad school (“more math,” “intellectual breadth”) and somehow turned into a synthesis of a line of criticism of contemporary IR.
So a more efficient explanation: a lot of what bugs me about mainline theorizing in the field is driven by the requirement to have capital-t “theories.” I’d never thought about it that way, but it makes sense. Let me work this through: If you’re supposed to have a capital-t theory, then you’re likely to grab off-the-shelf explanations that kinda-sorta fit with the correlational analysis and that everyone accepts because, well, everyone uses them. Is that right? And then those of us who spend a lot of time thinking about theory get frustrated…. Very, very interesting.
Hey! I use both math and history and I predicted we wouldn’t be far apart, what does that tell you? Seriously though, yes, that goes to the core of my own frustration. I’ll try to make some time next week to get back to this. That is … if three papers, four reviews and a new project don’t get in the way.
ooops. “liked” my own comment … Well, so much for that. I wanted to give one example of large-N analysis leading to new interesting (??) questions. In my research I found that all, and I mean ALL, but two or three Haitian leaders get either imprisoned or killed or are forced into exile when they lose office. This empirical finding leads me to ask “Who the heck would want to be a leader of Haiti, and why?” No theory here, just an empirical pattern established by large-N analysis, but a puzzle nevertheless and one that could be useful for generating new “theoretical” insight. OK, back to the ISA paper.
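[For what it’s worth, a sketch of the kind of tabulation described above, using a toy data frame; the column names and values are invented, not drawn from any real leader dataset.]

```python
# Tabulate the share of leaders per country whose exit from office was
# "bad" (imprisoned, killed, or exiled), on entirely made-up toy data.
import pandas as pd

leaders = pd.DataFrame({
    "country": ["Haiti", "Haiti", "Haiti", "Norway", "Norway"],
    "exit_fate": ["exile", "killed", "imprisoned", "retired", "retired"],
})

bad_exits = {"exile", "killed", "imprisoned"}
share_bad = (
    leaders.assign(bad=leaders["exit_fate"].isin(bad_exits))
           .groupby("country")["bad"]
           .mean()
)
print(share_bad)  # in this toy data: Haiti 1.0, Norway 0.0
```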
The way that I see the problem isn’t that what you just mentioned isn’t possible or desirable, but that the hegemonization (sorry if this isn’t a word) of neo-positivist statistical work blocks off avenues not only for non-neo-positivists to get jobs but for theoretical development in general. It isn’t that the way you describe above is bad, but that if that is the only way work is done, that is bad.
Then, of course, the question becomes (or maybe it doesn’t): is this a problem of neo-positivism, or is it a problem with any group of scholars who happen to ‘win’ an intra-field battle? I do sometimes wonder if this isn’t more likely (though certainly not peculiar to them, as my own experiences have taught me) among scholars who espouse a Cartesian dualism and therefore tend to believe in ‘progress’ and ‘truth’. If you believe that there is a ‘truth’ and that there is a particular way to get to that ‘truth’, maybe you are more likely to only hire and publish work that is similar to your own. That said, this could also be completely wrong.
So again, I want to state that I think the ‘problem’ isn’t neo-positivism but instead the hegemony of neo-positivism.
Here’s an equally (or maybe more) hyperbolic comment from anarchist anthropologist David Graeber on the changing social function of academia: “There was a time when academia was society’s refuge for the eccentric, brilliant, and impractical. No longer. It is now the domain of professional self-marketers. As a result, in one of the most bizarre fits of social self-destructiveness in history, we seem to have decided we have no place for our eccentric, brilliant, and impractical citizens. Most languish in their mothers’ basements, at best making the occasional, acute intervention on the Internet.”
https://www.thebaffler.com/past/of_flying_cars/print
A broader point is that under conditions of late capitalism, we live in a promotional culture, where we are encouraged to understand ourselves as a brand to be marketed and promoted. Or to put it somewhat differently, while I agree with the overall thrust of Dan’s critical observations, they are part of a more general transformation that has been taking place not only in political science, international relations, and academia, but in contemporary late capitalist (or whatever term you prefer) society. That’s not to say that the way it plays out in particular (professional, disciplinary, etc.) contexts isn’t relevant. Quite the contrary, since these are also (potentially) sites for resistance and transformation (another issue worthy of discussion). But I think it’s important to keep in mind the broader social and political context–not only because it matters, but also because it helps prevent a provincial mind-set and navel-gazing.