Tag: disciplinary norms

A Global Survey of IR Students – Might be Worth Pitching in your Classes

Daryl Morini, an IR PhD candidate at the University of Queensland whom I know, has put together an interesting global survey for undergraduate and graduate students of international relations. It looks pretty thorough and might make an interesting student counter-point to TRIP. Eventually the goal is an article on our students’ attitudes toward the discipline; here is the full write-up of the project at e-IR. So far as I know, nothing like this has been done before (please comment if that is incorrect), so this strikes me as the sort of student work we should support. Daryl’s made an interesting effort to use Twitter as a simulation tool in IR, so I am happy to pitch this survey for him. Please take a look; Daryl may be contacted here.


Yes, I do envy physicists

Even the Evil League of Evil has peer review.

One of the laziest sneers directed at us social scientists who use math and statistics in our research is that we suffer from “physics envy.”

Ha! It sounds like penis envy! It’s a quip that will slay them dead around the seminar table!

Well, I do use math in some of my work (to my great surprise). I also have a couple of working papers (soon, inshallah, to be articles) that have lots and lots of tables and graphs.

But that’s not why I have physics envy.

Nor is the source of my envy a commitment to the idea that the social universe can be studied exactly like the physical one—that I can generate hypotheses and test them to yield knowledge of the laws of the social world, via a process akin to the one my seventh-grade science textbook taught me Newton used to understand gravity. (And boy, was that book wrong.)

No. The source of my physics envy is the fact that people automatically respect physicists and they have no idea what I do.

Physicists have it easy. Practically nobody who is neither a physicist nor a crank has any firm opinions on the Higgs boson or the speed of light. Even the religious objections have pretty well been dealt with by now (albeit via two processes: persuasion and social coercion). But everyone gets that doing physics is tough work–that it is, in fact, respectable.

On the other hand.

Everyone has an opinion about social science. Often, people have many, many opinions. Sometimes they are ideologues or fundamentalists or autodidacts or otherwise intellectually crimped, and they therefore have a bizarre and unyielding attachment to their ideas. For some reason, though, certain labels–“economist” is the most prominent–nevertheless carry a certain cachet. (Marion Fourcade would note this is mostly only true in the Anglo-American tradition, but, hey, that’s my tradition.) People seem to think that at least some economists do good and useful work, even if they qualify that with terms like “Keynesian,” “Austrian,” “behavioral,” and so on.

But political scientists?

Well, thanks to John Sides and The Monkey Cage, several Washington political reporters have gotten to the point where they think that political science is worthwhile. But that persuasion has not stopped Congress from, essentially, redefining my vocation out of “science”–an act of rhetorical coercion that PTJ would note in some sense mimics what data-driven political scientists did to their more critical colleagues. And at the everyday level I note that practically nobody has a good sense of what I do. When I answer the question “what classes are you teaching” by saying “stats, in the spring,” the response I usually get is “What do political scientists know about statistics?”

(Statisticians and methodologists would agree with this völkisch notion that political scientists know very little about statistics, but for very different reasons.)

One common complaint within the discipline is that APSR has too much stats-driven work (which is really outdated; the quant-for-quant’s-sake work is now in Political Analysis). On the other hand, the most common reaction outside the discipline is that we don’t do stats at all. What’s the source of this incredible disconnect? A lot of this likely stems from the fact that many introductory courses–indeed, entire undergraduate courses of study–are taught like pre-law or current-events surveys, at least when they are not taught like a history of the twentieth century (I’m looking at you, international relations).

We should change that. We should view introductory courses as an opportunity to advertise what it is that political scientists do and why it matters. That means, however, that our departments should continue restructuring their undergraduate curricula, even more thoroughly than they have, to expose students to what the professional discipline requires. That means, in part, making data and its analysis central to the undergraduate experience. This is not to say that we should make undergrads into junior graduate students. Rather the opposite. We should guide their exploration to make sure they cover the entirety of the field, not specialize prematurely. But right now we aren’t even training good generalists.

To do otherwise shortchanges our students and ourselves. If political scientists can’t justify the intellectual contributions of our field to undergraduates–that is, if we think it’s okay to teach politics and not political science–then it’s no surprise that nobody knows what we do. And if nobody knows what we do, then how can they respect it?

Advance responses to commenters:

  • I’m sure your department is perfect in this regard; I’m talking about all the other departments out there.
  • Yes, this means that we’ll be squeezing out room for some discussions to privilege others. But choosing a syllabus–and maintaining a discipline–is inherently an exercise in deciding what to privilege. If there are coherent sets of academic scholarship housed in political science departments that have little to do with the work of the overwhelming majority of their colleagues, then we should strongly consider whether it is a good idea for them to be housed in political science. Perhaps the not-uncommon division of “political science” and “international studies” is one that more schools should adopt, for instance.
  • Yes, this means that I think we should expect our political science undergraduates to have minimal fluency with basic stats (up to the point of reading and interpreting an OLS table) and with some statistical package (even if that’s only Excel); a minimal sketch of what that looks like follows this list. This has obvious benefits for the discipline–early familiarity with statistical tools is the best inoculation against misusing those tools–and for our marketing of the major (“You’ll learn how to do data analysis!”). Does anyone think that either society or the marketplace will value statistical tools less in the future?
  • Yes, this means a relative devaluation of history and theory, but not an elimination of either. I’m unapologetic about this. On the other hand, these should probably be re-emphasized at the graduate level.
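By way of illustration, here is a minimal sketch of the fluency I have in mind, in Python with statsmodels rather than Excel; the data are simulated and the variable names are hypothetical stand-ins, but the point is being able to fit a model and read the resulting table.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
turnout = rng.normal(50, 10, size=200)                   # hypothetical predictor
margin = 5 + 0.3 * turnout + rng.normal(0, 5, size=200)  # hypothetical outcome

X = sm.add_constant(turnout)       # add the intercept column
results = sm.OLS(margin, X).fit()

# The summary is exactly the kind of OLS table undergrads should be able
# to read: coefficients, standard errors, t-statistics, p-values, R-squared.
print(results.summary())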

Challenges to Qualitative Research in the Age of Big Data

[Comic caption: Technically, “because I didn’t have observational data.” Working with experimental data requires only calculating means and reading a table. Also, this may be the most condescending comic strip about statistics ever produced.]

The excellent Silbey at the Edge of the American West is stunned by the torrents of data that future historians will have to deal with. He predicts that the petabytes of data being captured by government organizations such as the Air Force will be a major boon for historians of the future —

(and I can’t be the only person who says “Of the future!” in a sort of breathless “better-living-through-chemistry” voice)

 — but also predicts that this torrent of data means that it will take vastly longer for historians to sort through the historical record.

He is wrong. It means precisely the opposite. It means that history is on the verge of becoming a quantified academic discipline, for two reasons. The first is that statistics is, very literally, the art of discerning patterns within data. The second is that the history academics practice in the coming age of Big Data will not be the same discipline that contemporary historians are creating.

The sensations Silbey is feeling have already been captured by an earlier historian, Henry Adams, who wrote of his visit to the Great Exposition of Paris:

He [Adams] cared little about his experiments and less about his statesmen, who seemed to him quite as ignorant as himself and, as a rule, no more honest; but he insisted on a relation of sequence. And if he could not reach it by one method, he would try as many methods as science knew. Satisfied that the sequence of men led to nothing and that the sequence of their society could lead no further, while the mere sequence of time was artificial, and the sequence of thought was chaos, he turned at last to the sequence of force; and thus it happened that, after ten years’ pursuit, he found himself lying in the Gallery of Machines at the Great Exposition of 1900, his historical neck broken by the sudden irruption of forces totally new.

Because it is strictly impossible for the human brain to cope with large amounts of data, in the age of big data we will have to turn to the tools we’ve devised to solve exactly that problem. And those tools are statistics.

It will not be human brains that directly run through each of the petabytes of data the US Air Force collects. It will be statistical software routines. And the historical record that the modal historian of the future confronts will be one that is mediated by statistical distributions, simply because such distributions will allow historians to confront the data that appears in vast torrents with tools that are appropriate to that problem.
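To make that concrete, here is a minimal sketch, in Python with pandas, of the kind of routine I mean; the file name and column names are hypothetical stand-ins for whatever a digitized archive actually contains.

import pandas as pd

# No human reads the rows; the software reduces them to distributions a
# historian can actually confront.
counts = {}

# Stream the (hypothetical) archive in chunks, so memory never holds more
# than a slice of it at once.
for chunk in pd.read_csv("archive.csv", usecols=["year", "category"],
                         chunksize=1_000_000):
    for (year, cat), n in chunk.groupby(["year", "category"]).size().items():
        counts[(year, cat)] = counts.get((year, cat), 0) + n

# Reassemble into a year-by-category table: the "mediated" historical record.
summary = pd.Series(counts).unstack()
print(summary.head())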

[Figure caption: Onset of menarche plotted against year, for Norway. In all seriousness, this is the sort of data that should be analyzed by historians but which many are content to abandon to the economists by default. Yet learning how to analyze demographic data is not all that hard, and the returns are immense. And no amount of reading documents, without quantifying them, could produce this sort of information.]

This will, in one sense, be a real gift to scholarship. Although I’m not an expert in Hitler historiography, for instance, I would place a very real bet with the universe that the statistical analysis in King et al. (2008), “Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler,” tells us something very real and important about why Hitler came to power that simply cannot be deduced from the documentary record alone. The same could be said for an example closer to (my) home, Chay and Munshi (2011), “Slavery’s Legacy: Black Mobilization in the Antebellum South,” which identifies previously unexplored channels for how variations in slavery affected the post-war ability of blacks to mobilize politically.

In a certain sense, then, what I’m describing is a return of one facet of the Annales school on steroids. You want an exploration of the daily rhythms of life? Then you want quantification. Plain and simple.

By this point, most readers of the Duck have probably reached the limits of their tolerance for such statistical imperialism. And since I am a member in good standing of the qualitative and multi-method research section of APSA (which I know is probably not much better for many Duck readers!), who has, moreover, just returned from spending weeks looking in archives, let me say that I do not think the elimination of narrativist approaches is desirable or possible. First, without qualitative knowledge, quantitative approaches are hopelessly naive. Second, there are some problems that can only practically be investigated with qualitative data.

But if narrativist approaches will not be eliminated they may nevertheless lose large swathes of their habitat as the invasive species of Big Data historians emerges. Social history should be fundamentally transformed; so too should mass-level political history, or what’s left of it, since the availability of public opinion data, convincing theories of voter choice, and cheap analysis means that investigating the courses of campaigns using documents alone is pretty much professional malpractice.

The dilemma for historians is no different from the challenge that qualitative researchers in other fields have faced for some time. The first symptom, I predict, will be the retronym-ing of “qualitative” historians, in much the same way that the emergence of mobile phones created the retronym “landline.” The next symptom will be that academic conferences will come to be dominated by the pedantic jerks who only want to talk about the benefits of different approaches to handling heteroscedasticity. But the wrong reaction to these and other pains would be a kneejerk refusal to consider the benefits of quantitative methods.


State of the Field, Redux: What’s Wrong with IPE?



There are a few things that make me really hot under the collar. The first is the unending 100+ degree summer heat in central Texas. The second is the unending debate on the “state of the field”, in particular the state of the International Political Economy (IPE) discipline. It is a topic near and dear to my heart (IPE, not Texas heat). A few years ago, I was so provoked by Benjamin Cohen’s trenchant intellectual history of IPE and the reactions that followed that I put together a special issue on the so-called “American School of IPE” in Review of International Political Economy. This was soon followed by a special issue on the British School of IPE, edited by Nicola Phillips in New Political Economy. Finally, in hopes of achieving some closure on all the kvetching and navel-gazing, Nicola and I combined the two special issues and solicited a new round of essays, which came out last year as a book on the Past, Present and Future of IPE. At that point, I decided to stop worrying about the state of the field and return to more rewarding, substantive research.

But Dan’s post from a week ago on the state of IPE today brought all the angst back. Dan raised a simple yet powerful question: why have our top journals (specifically International Organization) had so few articles on the global financial crisis? For that matter, why do the top journals have so few IPE articles on anything of real importance to the world economy today?

Rather than stew in my own juices and provide a snarky reply, I turned to some of my uber-talented IPE friends with these questions. Here are two great responses I received from Mark Blyth and Thomas Oatley, which I reproduce here, with my thanks.

From Thomas Oatley, Associate Professor at UNC Chapel Hill, author most recently of a great IPE piece in IO, “The Reductionist Gamble: Open Economy Politics in the Global Economy”:

Perhaps no research directly relevant to the American financial crisis has appeared in IO because mainstream American IPE values general knowledge over case-specific knowledge. It believes further that general knowledge is produced through the statistical analysis of large samples. David Singer, in a recent APSA Political Economy section newsletter, nicely summarizes the kind of research this orientation implies. “From a research design perspective, a reasonable way forward is to test hypotheses about the conditional impact of capital inflows on the probability of financial crises in the developed world. The scope and quality of regulation are likely contenders for inclusion in such a model. The cases of Australia and Spain suggest that large capital inflows might be less destabilizing if the banking system faces strict capital requirements and prohibitions against non-traditional banking activities. Other possible conditioning variables include, inter alia, resource endowments, partisanship, and corporate governance.”

So why hasn’t IO published research along the lines Singer proposes? I suspect that such research has yet to appear because standard statistical techniques are not well suited to the complex causality that characterizes banking crises. This causal complexity has two dimensions. The first is equifinality: multiple causal paths produce banking crises. The post-liberalization “capital inflow bonanzas” that drove the Scandinavian crises are a different mechanism from the “post-Louvre over-valued yen with abundant domestic savings” mechanism that generated Japan’s banking crisis in the late 1980s, which is in turn different from the over-exposure to Greek sovereign debt that underlies the current weakness of German banks. All three mechanisms might differ from the “zero private savings, large government deficit and global savings glut of historic proportions” mechanism that caused the US crisis.

Second, causality may be conjunctural. That is, rather than having a consistent effect across cases, the impact of a variable might depend on how it combines with other factors. An over-valued currency on its own may not increase the probability of a banking crisis, but an over-valued currency in combination with surplus domestic savings and a particular regulatory structure may have caused Japan’s banking crisis. Multiple conjunctural causality is challenging for standard statistical techniques, although techniques for managing these challenges do exist (see Bear Braumoeller, 2003, “Causal Complexity and the Study of Politics,” Political Analysis 11: 209–233).

Why haven’t quantitatively oriented IPE scholars applied techniques such as Braumoeller’s to the study of banking crises? I think the problem may rest in the rarity of major banking crises. According to Reinhart and Reinhart, only 5 major systemic banking crises occurred in developed countries between 1973 and 2007. If three or four distinct causal mechanisms are at work in these five crises, it will be difficult to find statistically significant configurations among sub-sets of crises.

In short, I would argue that no articles directly relevant to the financial crisis have appeared in IO because the field attaches little value to studying the US crisis in isolation, and the banking crises with which it might share common properties are so infrequent that statistical techniques are unlikely to identify general relationships. As a result, an event of supreme global importance gains very little attention from American IPE scholars.
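An editorial aside before Mark’s response: the simplest statistical gesture toward Thomas’s conjunctural-causality point is an interaction term in a logit model, though Braumoeller’s Boolean approach is more general. Here is a minimal sketch in Python with statsmodels; the data file and variable names are entirely hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel with a binary crisis indicator.
df = pd.read_csv("banking_crises.csv")

# Over-valuation is allowed to matter only in combination with surplus
# savings: the interaction coefficient carries the conjunction.
model = smf.logit(
    "crisis ~ overvaluation * surplus_savings + regulation",
    data=df,
).fit()
print(model.summary())

Of course, with only five systemic crises in the developed-country sample, any such interaction coefficient would come with enormous standard errors, which is exactly Thomas’s point about rarity.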

From Mark Blyth, Professor at Brown University, hard at work on a book about the financial crisis that is bound to be a classic in the field:

There are more than a few IPE scholars who have written about the financial crisis and its aftermath. It’s just that they have done so in venues that are not as cumbersome as traditional peer-reviewed journals. There are two problems with looking to such journals as venues.

The first is the ‘hit the moving target’ problem. I wrote a piece in 2008 called ‘this time it really is different’ on the 2008 crisis and the EU, and by the time I got editor comments, it had morphed into the Euro crisis. Add publication time-tabling into this and almost anything you can say is redundant; by the time you revise it to catch up, it’s redundant again. Economists (as usual) have an advantage over us with sites like the NBER and VoxEU designed to get it out quickly, so they get the press.

The second is the ‘discipline of discipline’ problem. Frankly, younger IPE scholars are taught to work with quant data and not say anything beyond it. That’s the skill set. They are taught to do ‘tractable’ questions. What’s tractable about the GFC? That’s a problem when past data is of absolutely no use in discerning future trends beyond the broad Reinhart and Rogoff ‘let’s dump medieval Spain and modern France in the same data set and talk about defaults’ approach.

Others can talk about intellectual hegemony and the like, but as someone who has sat on an editorial board for many years, I can say it’s the submissions, or lack thereof, that are the real killer. Why aren’t IPE journals publishing crisis work? Possibly because no one is submitting it? Or because it’s much more bang for the buck, and much faster, to publish in Foreign Affairs or online?

One last thought. All journal submissions need to be tied into disciplinary debates in order to pass the sniff test at a journal. So what is the debate that the crisis ties into that IPE has a track record on? US decline (got that wrong several times)? Institutional change (the most popular models are all about incremental change while the world gets smacked by a Black Swan every week)? Diffusion (of what, panic)? Human rights and trade (relevance?)

The fundamental problem is that IPE imagines a world quite unlike the one we actually inhabit much of the time. As a consequence when we are asked to comment on the world we actually inhabit, we have little to say.

Finally, I should note that there are in fact some great works out there by IPE scholars that directly hit on the current global financial crisis. I won’t try to be comprehensive, in fear of overlooking several obvious examples, but I’ve read (or re-read) three in the past month that are simply terrific: Herman Schwartz’s Subprime Nation; Randy Germain’s Global Politics and Financial Governance, and Eric Helleiner, Stefano Pagliari and Hubert Zimmermann’s (eds) Global Finance in Crisis.

If anyone out there can point to other great sources – in journals and books – please send in your suggested readings list.


On Paradigms, Policy Relevance and Other IR Myths


I had every intention this evening of writing a cynical commentary on all the hoopla surrounding Open Government, Open Data and the Great Transparency Revolution. But truth be told, I am brain-dead at the moment. Why? Because I spent the last two days down in Williamsburg, VA arbitrating codes for a Teaching, Research and International Politics (TRIP) project (co-led by myself and Jason Sharman) which analyzes what the field of IR looks like from the perspective of books. It is all meant as a complement to the innovative and hard work of Michael Tierney, Sue Peterson and the TRIP founders down at William & Mary, who have sought to map the field of IR by systematically coding all published articles in the top 12 peer-reviewed disciplinary journals for characteristics such as paradigm, methodology, epistemology and policy relevance. In addition, the TRIP team has conducted numerous surveys of IR scholars in the field, the latest round capturing nearly 3,000 scholars in ten countries. The project, while not immune from nit-picky criticism about its methodological choices and conclusions, has yielded several surprising results that have both confirmed and dismantled several myths about the field of IR.

So, in the spirit of recent diatribes on the field offered by Steve and Brian, I summarize a few of the initial findings of our work to serve as fodder for our navel-gazing discussion:

Myth #1: IR is now dominated by quantitative work

Truth: Depends on where you look. This is somewhat true if you confine yourself to the idea that we can know the field only by peering into the pages of IO, ISQ, APSR and the like. Between 2000 and 2008, according to a TRIP study by Jordan et al. (2009), 38.8% of journal articles employed quantitative methods, while 30.4% used qualitative methods. [In IPE, however, the trend is definitely clearer: in 2006, 90% of articles used quantitative methods — see Maliniak and Tierney 2009, 20.] But the myth of quantitative dominance is dispelled when we look beyond journals. In the 2008 survey of IR scholars, 72% of scholars reported that they use qualitative methods as their primary methodology. In our initial study of books between 2000 and 2010, Jason and I found that 58% of books use qualitative methods and only 9.3% use quantitative (the rest using mainly descriptive methods, policy analysis and the rare formal model).

Myth #2: In IR, it’s all about PARADIGMS.

Truth: Well, not really. As much as we kvetch about how everyone has to pay homage to realism, liberalism, constructivism (and rarely, Marxism) in order to get published, the truth is that only a minority of published IR work takes one or more of these paradigms as its chosen framework for analysis. Surveys reveal that IR scholars still think of realism as the dominant paradigm, yet realism shows up as the paradigm of choice in less than 10% of both books and articles. Liberalism is slightly more prevalent – it is the paradigm of choice in around 26% of journal articles and 20% of books. Constructivism has actually overtaken realism, but still amounts to only 11% of journal articles and 17% of books in the past decade. Instead, according to the TRIP coding scheme, most IR work is “non-paradigmatic” (meaning it takes theory seriously, but doesn’t use one of the usual paradigmatic suspects) or is “atheoretic”. [Stats alert: 45% of journal articles are non-paradigmatic and 9.5% atheoretic, whereas books are 31% non-paradigmatic and 23% atheoretic.]

So, Brian: does IR still “really like” the isms?

Myth #3: Positivism rules.

Truth: Yep, that one is pretty much on the mark. 86% of journal articles AND 85% of books between 2000 and 2010 employed a positivist methodology. Oddly, however, only 55% of IR scholars surveyed report that they see themselves as positivists. I’m going to add that one to the list of “things that make me go hmmmmm…..”

Myth #4: IR scholarship is not oriented towards policy.

Truth: Sadly, true. Only 12% of journal articles offer policy recommendations. [OK, a poor proxy, but all I had to go on from the TRIP coding system.] Books are slightly more likely to dabble in policy, with 22% offering some sort of policy prescriptions – often quite limited and lame, in my humble coding experience. Still, curiously, scholars perceive themselves differently: 29% of scholars say they are doing policy-oriented research. This could be entirely true if they are doing it outside the normal venues of published research in the discipline and we’re simply not capturing it in our study (blogs, anyone?). All of which raises several questions: are IR scholars really engaging in policy debates? If so, how? Where? If not, why not? (Hint: fill out the next TRIP survey in fall 2011 and we’ll find out!!)

(Note to readers: I was unable to provide a link to the draft study that Jason and I conducted on books, as it is not yet ready for prime time on the web. But if you have any questions about our project, feel free to email me).


Write less. Read more.


I thought I would offer my take on the series of posts below on how to reform the peer-review system. Part of the problem is bad reviews. We could remove reviewers’ anonymity, but that raises obvious problems (bad blood, retaliation, pulling punches, speaking truth to power, etc.). A better solution would be to rely on the expertise of editors. A really stupid review is obvious even to non-experts. And if one reviewer says X and the other says ‘opposite of X’, then you should know that you have to get a third opinion, not just reject the piece for failing to garner sufficient support from the beginning (this is Dan’s point).

Why can’t editors do this? The biggest problem seems to be workload. They just don’t have the time to carefully sort through reviews and evaluate them as critically as the paper under consideration. But how do we lessen their workload? Write less. Read more. This goes for graduate students and professors. Graduate students are “socialized” earlier these days and this generally means they are pushed to publish. The tenure standards are higher everywhere now. None of this is good for the field. All of this means too many papers that don’t tell us very much. And they make editors’ jobs impossible.

Political science and IR today reminds me of elementary education. Most everyone in educational psychology seems to agree that kids really shouldn’t be pushed early on, that the social part of early grade school is more important than the academic, that they aren’t cognitively ready to do proper book learning. Yet we expect more academically of our little ones than ever before. They screen them in my town for their competence in the ABC’s upon entering kindergarten, then tell us what to work on over the summer. Go f*&k yourself.

Most everyone also agrees that some kids, particularly boys, develop more slowly intellectually, but of course eventually catch up. And also that it is better to think creatively rather than to do rote learning. Yet homework starts in first grade and the first thing they focus on is penmanship. Ugh.

Without saying that grad students are just kids, we have to recognize that there is a growth process in graduate school, and frankly even in the assistant professor stage. By pushing younger academics to write too early and too much, we deny them the chance to read widely and think more creatively. We also encourage people to be carbon copies of their advisors, because people who don’t have a chance to develop their own voice will merely parrot their mentors. The metric of success becomes a simple process of counting journal articles and where they are placed, not WHAT THEY SAY THAT IS NEW AND INTERESTING. This gets to some of my earlier complaints in Stuff Political Scientists Like: that we fetishize new data to the detriment of new ideas.

This is partly because the people who are doing the judging (tenured faculty members, journal reviewers, search committees) are themselves writing too much and don’t have time to properly read and critically evaluate other people’s work. We have created an academic environment where everyone is in his or her own bubble. No wonder we all get (and write) lousy reviews. I personally don’t keep up with the journals as much as I would like to, and this makes me sad.

Maybe I am idealizing the (fairly recent) past, and maybe Berkeley was just different, but when I was in grad school, not so long ago, grad students didn’t judge academics by how many APSR articles they had published, but by their ideas. Oh, Schweller, he is the ‘balance of interest’ guy. Oh, Moravcsik, he is the ‘liberal intergovernmentalist.’ I frankly had no idea about how much people had published, only what they had published.

Of course, this probably cannot be changed because it is an arms race. I can’t tell my students to slow down, to stop and think, because they will get cut off at the knees by the more ‘socialized’ grad students in other programs on the job market. Or they might not get tenure at a university that simply counts beans. I find this not only sad but also deleterious to the discipline. I can’t do much about it, but my kids are going to spend the summer in the yard, not with workbooks.


Robert Keohane the Situated Scholar

I have found an interesting counter-example to my earlier lament about disciplinary norms restricting open reflection by IR scholars about their personal trajectory and history as it relates to their work. It is Robert Keohane’s recent interview in UC-Berkeley’s “Conversations with History” series, in which he expounds on the myriad personal, cultural, and social influences that have informed and shaped his research.

Keohane’s willingness to expound on his personal relationship to his subject matter is not limited to public interviews, of course; it also constituted a chapter in his book International Institutions and State Power. While the chapter was not an example of Kingdonian activism – that is, it was not an attempt to account methodologically for his personal influence over the subject matter of his study – I think it does meet Jim Rosenau’s criteria for “situating the scholar” in the world. And I continue to think this is an example to be revived and diffused among younger scholars as well.


Kingdonian Activism?

This is none too pithy, but tonight (this morning?) I’m just going to toss up a whole stream of ideas that have been percolating since the first BioNote discussion and took shape while I was attending the ISA conference in San Francisco. They relate to a very problematic final chapter I’m struggling over now in which I attempt to account for the role I’ve played (by researching a non-barking dog) in inadvertently getting the dog to bark.

Many of us do this of course – interface with the real world in ways that put us inside the subject matter we’re studying – but there are not good formulas in IR scholarship for reflecting about this explicitly (much less analyzing one’s own actions as part of our dataset) as we write up our results.

(This ties into the earlier “Bionote” discussion in two ways. First, bionote norms, I’ve argued, are simply an example of how IR encourages people to think of themselves as observers rather than participants in the worlds we are studying. Second, the autobiographical nature of the ISA panel about which I blogged was due to, not a deviation from, this set of norms. The panel was organized as an autobiographical set of war stories from policy work by academics precisely because, as Janice Gross Stein pointed out, autobiography is all we have to go on: we don’t bother to do empirical studies of how we “bridge that divide” and with what consequences.)

But why don’t we? I think it is due to the very trend I was trying to articulate before – IR scholars have some stake in keeping up the pretense that we exist separately from international affairs. We’re encouraged through various disciplinary norms to occlude serious empirical analysis of the way our role in conducting research – especially if we do elite interviews, to say nothing of blogging, writing op-eds, or consulting – affects the processes we’re studying.

To be sure this is probably truer of some projects than others. I’ve concluded it’s quite true of my work on “dogs that don’t bark” in transnational advocacy networks (namely the absence of children born of war rape on the human rights agenda) and I’m trying with some difficulty to account for this as I complete my current manuscript.

Consider this anecdote from the concluding chapter:

“In the spring of 2006, I presented my preliminary findings regarding the non-emergence of “children born of war” at the University of Pittsburgh’s Research in International Politics (RIP) monthly brown-bag. In such circles, heavily dominated by empirical approaches, one does not present normative theory – that is, value-laden arguments about how the world should look, or policy-oriented sets of recommendations about particular problems. Rather, one identifies puzzles about the world and then goes about solving them by applying or modifying existing theories. Theories, in this sense, are lenses said to explain and predict major patterns in world affairs.

Therefore, I had organized this particular paper not as a problem-focused human rights argument about children, but rather as an empirical study of “issue non-emergence” within advocacy networks. I presented the subject of “children born of war” as a negative case and demonstrated why, from the perspective of agenda-setting theory, this might be considered an interesting puzzle. The case, I argued, showed that we needed a different understanding of the obstacles to issue emergence.

My colleagues provided a variety of suggestions on the theory, the methods and the structure of the argument. But one piece of advice particularly sticks out in my mind. “You’d better stop talking to international organizations about this issue until you publish,” said one of my senior colleagues. “Otherwise, before you know it you will no longer have a puzzle to explain, because these children will be on the agenda.”

Note two things about this comment. First: the idea that more attention to this population should matter less to me (or anyone) than the ability to advance my career by publishing an interesting paper. Second: the acknowledgement by my colleague that in researching the non-emergence of “children born of war,” I was in fact engaged in “issue entrepreneurship” myself, which could alter the research findings.

If the previous chapters illustrate anything, it is that the process of researching human rights is in fact intimately connected with the practice of constructing human rights in and around a variety of policy arenas. In other words, far from existing outside their subject matter, human rights intellectuals are part of the human rights movement and actively (if inadvertently) shape it. But my colleagues did not advise me to explicitly account for this factor in my research or discuss the academy as a source of momentum or resistance to new human rights issues. To do so would have been to breach certain professional norms within epistemic communities of political scientists, norms that suggest that “real” research is distinctive from advocacy.

As explained briefly in chapter one, I have sought to exploit the recursive relationship between academics and practitioners methodologically. The research process for me has consisted largely of poking around the human rights regime asking questions about what is not on the agenda and asking practitioners to justify their answers. This has provided insight into the regime itself, as well as its silences and the cultures within which different practitioners move.

However, such a method does constitute a notable, if modest, agenda-setting function in its own right. Simply raising a new issue in a conversational setting ‘makes people think’ and stirs up dialogue. Such communicative action can lead to organizational innovation. It also introduces the individual researcher to the network of gatekeepers who can stop an issue from emerging, as well as to those ‘true believers’ who might push for it. The practitioners may come to see the researcher, through these discussions, as an expert on the substantive topic. The researcher might be invited by true believers to consult or share findings with the practitioner community.

The choice of whether and how to exercise this role has implications both for the research and for the organizations under study, and eventually for the population of concern. It is therefore impossible and irresponsible to pretend that the research process itself has not influenced the very communities of practice we study. Acknowledging this has required me to reflect on my own role in the human rights network, and that of like-minded colleagues and of academia as a whole, as part of the subject matter of the book.”

But how to do so systematically? In developing a literature review to ground the remarks in this chapter, I re-read with some interest Patrick Jackson and Stuart Kaufman’s Perspectives on Politics piece “Security Scholars for a Sensitive Foreign Policy.” I liked this article because it represented an example of IR scholars self-consciously engaging with the real world, and theorizing about the process of doing so. To wit, the authors document their role in SSSFP, an advocacy effort led by IR scholars to illuminate cause and effect relationships regarding the Iraq war while remaining politically agnostic, thus maintaining scholarly credibility. According to Jackson and Kaufman: “Weberian activism… is an appropriate stance for scholars who wish to engage in debate on public issues.”

However I found their application of “Weberian Activism” too limiting to inform the problem I’m facing, since the authors’ careful incursion into political activism comes at a much later stage of the policy process. In other words, their model cannot inform the work of scholars studying cause and effect relationships in agenda-setting – that is, how “public issues” get constructed in the first place, and particularly why some get framed off the public agenda altogether. Any scholar who “pokes around” a policy domain trying to analyze this part of the policy process, even if s/he limits herself to standard methodologies and avoids “open advocacy” like the plague, will influence the thoughts of policy-makers and possibly affect the very outcomes s/he is studying despite his/her best efforts.

And yet the argument Jackson and Kaufman make reifies the very divide between academics and policymakers about which we might be so usefully explicit. The entire article is an exercise in finding some way to reconcile the authors’ different “hats” as researchers of world politics and participants in world politics. They come down on the side of privileging their scholarly identities (maybe because they were writing this article for a scholarly journal?), even though doing so meant essentially abrogating the possibility of being effective in their activist exercise. At any rate, such a solution would be impossible if they were seeking to “encourage broad acknowledgement of facts and problems” in an area where no policy debate already existed, because in such a case the research itself would play an agenda-setting role that could not be summarily negated through the logic of Weberian activism.

Probably, what we need in such cases is to acknowledge what might be referred to, following Stephen Krasner, as “Kingdonian Activism.” OK, I coined this term, not Krasner, but he inspired me this week at ISA. On the “Bridging the Theory/Policy Divide” panel at ISA about which I blogged earlier, Stephen Krasner argued that bridging the gap is the wrong framework. Instead we should be using Kingdon’s garbage can model to think about how academia interfaces with politics. Krasner suggests academics are just one group among many contributing to what Kingdon describes as the policy “soup.”

The question of our moral responsibility as persons with civic identities to deliberately engage as activists in such cases is an important issue, but it is not the subject of this post. What I’m interested in here is our responsibility as scholars to account for our role, however inadvertent, in influencing the policy processes we are studying.

Here is a more recent example of what I mean. The Campaign for Innocent Victims in Conflict (CIVIC), about whose work I blogged approvingly not so long ago, constitutes an excellent case study for my ongoing work on the dialectic between issue entrepreneurs and establishment advocacy organizations. Since I’m interested in why new ideas do or don’t get picked up by lead actors in advocacy networks, I’m curious to follow this campaign, trace CIVIC’s strategies over time and watch to see what works for them, what doesn’t and why.

However, insofar as policy elites or those within a degree of separation from them do read this blog, my blog post itself arguably has become part of the socio-political dataset on the construction of CIVIC’s platform. How, and to what extent, must I account for this in my research? Do I analyze the post and the comments to it as part of the total dataset? If so, should I interview myself, or at least reflect on and be transparent about my motivations in posting, as I would interview others responsible for content I’m analyzing? Should I code myself as an issue entrepreneur or issue advocate for discussing this campaign on blogs, conference panels or in the classroom? Or should I avoid any mention of the issues I’m studying in my personal or professional conduct until my research is complete?

It doesn’t matter, because even if I had never blogged about CIVIC, the very process of conducting interviews with people about issue construction plays a role in constructing issues. When I interviewed the CIVIC Executive Director, the conversation itself allowed her to think through the organization’s strategy in ways that may have changed her thinking. And interviews with gatekeepers about issues they’re ignoring force them to justify non-policies, which helps me understand their thought process but also places me momentarily in an issue-entrepreneur role. (I’ve noticed this particularly in my children born of war research – when I talk about my work on BBC, for example, I’m usually contributing to awareness raising about an understudied human rights problem, so my comments would arguably become part of the documentary record I’m then supposed to be tracing.)

Dan Drezner had an answer to this question in his comments last November at the “Who Are the Global Governors?” Workshop at George Washington University. His take on it is that researchers “just aren’t that important” in the global policy cycle. If true, then attempting to include our own impacts in the analysis comes off as vain and narcissistic (as many comments to an earlier post suggest).

But a lot of famous global norm entrepreneurs have been academics, and the whole epistemic communities literature documents the way in which scientists influence the policy process. Bridging the theory/policy divide may not be easy or prevalent in our discipline, but it is done, and in the absence of serious empirical studies of how it’s being done and with what effect/side-effect, it’s hard to know how to do it properly and where the methodological tradeoffs lie.

Perhaps those of us in this position should be trained to recognize it and to adopt participant-observation methods explicitly and transparently. This is standard practice in sociology, but rare in IR. For example, Peter Haas’ work on epistemic communities draws on conversations he has as a participant in global policy processes, but he does not spend time in his scholarly outputs discussing how his presence inside his subject matter influences his findings and impacts the politics he’s studying.

The problem is that people like myself (and, I think, Peter) want to be reasonably positivist in our work – in other words, we’d like to observe phenomena and analyze them in a valid, objective, replicable way. But we also want to observe phenomena that can only be observed from the inside out. Our observations of the phenomena themselves may be reasonably empirical; our observations of our own interactions will necessarily be interpretive.

I can see several possible solutions within our discipline to this seeming need to either degenerate into interpretivism or to deny our role in our subject matter in order to perpetuate an illusory dichotomy between positivist and reflectivist approaches.

1) Recognize and legitimate the complexities outlined above, and incorporate ethnographic methods into standard IR methodology training. Some standard criteria would need to be developed for judging the circumstances under which a scholar should know they need to do this in order to retain credibility, since not all projects call for it.

2) Delegate the role of analyzing one’s own interface with one’s subject matter to a third party. In the case of my current manuscript, then, I would write up the analysis of the human rights regime as I observed it as an academic, but I would turn over my field notes, published work on the topic, correspondence with political players, briefing notes and slides, consulting records – any evidence of an impact by me on the policy domain I’m studying – to one or more people, coders if you will, whose assignment would be to evaluate how much I had shaped the politics of an issue simply by researching it. Done that way, the findings would be independent of my own subjectivity and, ideally, reliable across observers. Often the impact found would no doubt turn out to be minimal; but where significant, the author could then refer to some evidence other than their own judgment.
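For what the second proposal would demand in practice, here is a minimal sketch in Python; the coders, the rating scale, and the numbers are all hypothetical, but something like Cohen’s kappa is the standard way to check that such judgments are reliable across observers.

from sklearn.metrics import cohen_kappa_score

# Two hypothetical coders each rate the researcher's impact on a set of
# episodes: 0 = none, 1 = modest, 2 = significant.
coder_a = [0, 1, 0, 2, 1, 0, 0, 1, 2, 0]
coder_b = [0, 1, 0, 1, 1, 0, 0, 1, 2, 0]

# Kappa corrects raw agreement for chance; by convention, values above
# roughly 0.8 are read as strong reliability across observers.
print(cohen_kappa_score(coder_a, coder_b))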

Am I making any sense here, or are these merely the rantings of a seriously jet-lagged assistant professor nervous about including a “radical” book chapter in a manuscript to a university press, late at night, after a day of transitioning back from ISA with needy children, and too many glasses of red wine?

Reactions if you please.


Beyond the Bio Note

I quite liked an article I read in International Studies Review a couple of years back, a conversation between Ersel Aydinli and James Rosenau, published under the heading “Courage vs. Caution: A Dialogue on Entering and Prospering in IR.”

One of the most thoughtful exchanges in the “Dialogue” has to do with academic norms regarding full disclosure of our identities as scholars / persons.

Rosenau argues, defending a “commitment to explicitness” exemplified in his concluding Distant Proximities chapter, that scholars should situate themselves in the world they are writing about so as to allow readers to better assess and evaluate their arguments:

“Some might see that chapter as overly egotistic, but I see it as living by the commitment to explicitness. It is true that all too few analysts proceed in this way. One is hard pressed to find a book, or even a paragraph, in which the author sets forth the personal background factors underlying his or her work… though it would be standard procedure to have at least a paragraph in a preface that tells the reader where the author is coming from.”

Aydinli goes on in the same piece to articulate Rosenau’s point more forcefully:

“Perhaps we should consider a disciplinary movement to encourage our members to develop and expand the currently accepted genre of the ‘author’s bio note’ into something more revealing and explicit than simply affiliation and research interests. I would like to see, for example, some indication of the author’s past history, such as where they have worked and lived. Has the author remained all of his or her life in one place? Did he or she take a break along the educational path to join the Peace Corps, live abroad, or work in a different field? I also think it would be valuable to know about some of the author’s non-professional affiliations or interests. Of course it would be up to the individual author to determine how many or which of these affiliations to provide, but even that choice would be revealing to the readers and help them interpret the content of the text… authors [might also be] encouraged in their texts to indicate how they came to choose the research topic or particular questions they investigate. Was it simply a personal interest or were there pragmatic issues involved such as a future grant? Was the topic of global or current scholarly interest or something sparked by a dinner table conversation?”

I quite like this idea. I think it would make our research far more objective, and help us evaluate one another’s work far better if such a norm of full disclosure took root. It might also help us acknowledge and make sense of our presence in the worlds we study, something which Jackson and Kaufman grappled with, for example, in their Perspectives piece on Weberian activism; and which I am grappling with now as I develop a concluding book chapter on children born of war that accounts for my influence on my own subject matter.

But I also know from first-hand experience that Rosenau is right: there is currently no such norm. This is becoming scarily apparent to me as I complete my book, written with various efforts to follow Rosenau’s advice, and now have to peddle it to mainstream IR presses who will no doubt insist I edit that kind of quasi-narcissistic reflectivism right out.

Even efforts to bend in that direction just slightly result in disciplining moves from academic gatekeepers. (Don’t know about you readers, but the last time I submitted my author’s bio to a leading IR journal with mention of my children and predilections for science fiction – matters with, I argue, some bearing on the topics I study and the methods I use – I was told it wouldn’t fly.)

And perhaps rightly so.

But perhaps not. Thoughts?

