Tag: theory v. policy

Jarrod Hayes on “Bridging the Gap”

Editor’s Note: This is a guest post by Jarrod Hayes, who is an assistant professor of International Relations at the Georgia Institute of Technology. His research focuses broadly on the social construction of foreign and security policy. The post deals with the International Policy Summer Institute, which has also received coverage at The Monkey Cage.

I have had the great pleasure and honor to attend the Bridging the Gap/International Policy Summer Institute hosted by (the very impressive) American University School of International Service. The experience has been a rich one, with an amazing cohort and substantial depth and scope from the speakers. Out of that depth and scope, one point really caught my attention. A few of the speakers have anecdotally noted that policymakers often think using the concepts and logics of theory, but they are unaware that is what they are doing. This is a really fascinating point, and potentially one of the ways that scholars might be able to ‘bridge the gap’ between academics and policymakers without writing explicitly policy-oriented scholarship (although we should do that too).

The Academic-Policy Nexus: A Few More Thoughts

Regarding my previous post and the very useful comments it received: first, the matter of what we do once we realize that a policy problem in search of a policy solution is the equivalent of a social-scientific puzzle in search of an explanation, for both the solution and the explanation are outcomes. In other words, Step One is to identify the policy problem in question. Step Two is to search the academic literature for a published study (in book or article form) whose puzzle is essentially identical to the policy problem. For example, the policy problem of how to end a civil war in Country X is equivalent to the academic puzzle of how civil wars end.

The study’s explanation is the academic hook on which to hang the policy solution. In other words, if there is a published study that explains the outcome of bringing civil wars to an end, then that study identifies the cause of the outcome and has the evidence to back up the argument, thereby matching cause to outcome. Once such a study is found, it is on to the next step.

Transnational Politics, i(I)r(R) and the Information Age

Today I presented some thoughts at Henry Farrell’s International Studies Association panel on “Transnational Politics and the Information Age.” The panel, which included Joe Nye, Dan Drezner, Marty Finnemore, and Abe Newman, looked at the subject fairly broadly:

Public debates over the politics of the information age have been dominated by a battle between cyberoptimists, who believe that the Internet will lead to a fundamental transformation of social relations, and cyberpessimists, who claim that the Internet will either have no effects or harmful ones. These debates partially map onto international relations arguments about the relationship between state power and globalization. Yet there is little work in international relations that seeks to analyze the relationship between information flows and global politics. This is all the more remarkable given that information politics (whether the dissemination of sensitive government cables by Wikileaks, or the role of new media in the “Arab Spring”) seems to have direct, and sometimes dramatic, consequences for central IR concerns. In this roundtable, we bring together scholars to examine in more depth the relationship between information technology and transnational politics. Has the rise of digital networks facilitated cross-border political organization, or has it ultimately re-empowered the nation state? In either case, what points of variation exist in the political dynamics that have been unleashed? The distinguished participants offer a range of theoretical and empirical perspectives on this core debate concerning the relationship between information technology and global politics.

In honor of the conference theme, I uploaded my presentation to YouTube.

The discussion afterward ranged widely; topics included Wikileaks, cybersecurity, pedagogy, and more. A fair amount of time was spent discussing Kony2012, though, and one question that none of us really answered very well was what exactly makes videos go viral, and whether narrative structure matters. After the roundtable Michele Acouto sent me this TED video by tweet, which I thought worth sharing.

Scholarship and Advocacy: Bomb Iran Edition (UPDATED)

My colleague, Matt Kroenig, has generated a ton of buzz (and not a little vitriol) for his Foreign Affairs piece in which he advocates imminent US military action against Iran. What’s probably less well known, however, is that Matt and Mike Weintraub, a graduate student at Georgetown, have a working paper in which, as they write:

We argue that nuclear superiority, by increasing the expected costs of conflict, improves a state’s ability to deter potential adversaries. We then show that states that enjoy nuclear superiority over their opponents are less likely to be the targets of militarized challenges. Arguments that contend that a minimum deterrent posture is sufficient to deter militarized challenges do not find support in the data.

As I’ve been discussing with Matt on Facebook, I see a real tension between these findings and claims that a nuclear Iran poses such a grave danger to US national interests that Washington must, as soon as possible, launch a military strike against Iranian facilities. After all, if Matt and Mike are correct then we should expect both that the massive asymmetric nuclear advantage enjoyed by the US will deter Iran, and that Iran’s possession of a few nukes will not greatly alter its behavior.

If I am right, then Matt joins a long line of international-relations academics whose policy advocacy doesn’t entirely cohere with their scholarship. For example, a significant number of offensive realists signed letters opposing the Iraq war, even though their theories suggest that states should, and will, maximize power in the international system.

Given all this, I’m curious what other Duck readers and writers think should be the relationship between academic scholarship and policy advocacy.

Do we have an obligation to be completely consistent across both domains, or do the real differences between our political and academic roles suggest otherwise? How much “policy weight” should we give to any particular academic finding? If we are skeptical of extrapolating too much from one or more pieces of international-relations scholarship, does it matter if that scholarship is our own and we are making the policy recommendations? Does the methodology of the work matter, e.g., if the piece involves a linear regression such that we expect individual cases to be outliers? And what is the comparative “truth value” of our policy advocacy?

UPDATE: Matt weighs in below on the substantive merits. Someone also pointed to a draft of Matt’s forthcoming piece, which I think reinforces the questions I raise, insofar as it is an example of an academic paper with policy recommendations. For example:

Given that the most likely conflict scenarios between these two states would occur in the Middle East, the balance of political stakes in future confrontations would tend to favor Tehran. The brinkmanship approach adopted in this paper concurs that proliferation in Iran would disadvantage the United States by forcing it to compete with Iran in risk taking, rather than in more traditional arenas. On the other hand, the findings of this paper also suggest that the United States could fare well in future nuclear crises. As long as the United States maintains nuclear superiority over Iran, a prospect that seems highly likely for years to come, Washington will frequently be able to achieve its basic goals in nuclear confrontations with Tehran.

As we all agree, Matt’s model points to increased risk. But do such conclusions really support the notion that the United States must strike immediately or face apocalyptic consequences?

Policymakers Just Don’t Understand

Erik Voeten, one of my colleagues at Georgetown, writes at the Monkey Cage that:

International relations, and especially (inter)national security, is the subfield of political science where the gap between policy makers and academics is most frequently decried. This is not because political science research on security is less policy relevant than in other subfields. Quite the contrary, it is because political science rather than law or economics is the dominant discipline in which policy makers have traditionally been trained. In short: there is more at stake.

Erik takes as his point of departure an exchange between Justin Logan and Paul Pillar at the National Interest (itself riffing on a forum surrounding Michael Mosser’s “Puzzles versus Problems: The Alleged Disconnect between Academics and Military Practitioners”).

I understand complaints that much IR scholarship does not seem relevant to the kind of questions policy-makers are struggling with. Yet incessant complaints about the rigor or difficulty of scholarly work reveal more about policy-makers than about academia. IR theory is for the most part not very hard to understand for a reasonably well-trained individual. The possible exception is game-theoretical work, which constitutes only a small percentage of IR scholarship. My bigger worry is that foreign policy decision makers are avoiding any research using quantitative methods even when it is relevant to their policy area. There is a real issue with training here. My employer, Georgetown’s school of foreign service, at least requires one quantitative methods class for master’s students (none for undergrads). Many other schools have no methods requirement at all. By comparison, Georgetown’s public policy school requires three methods classes. It is not obvious to me why those involved in foreign policy-making require less methods training for their daily work. The consequence is, however, that we have a foreign policy establishment that is ill-equipped to analyze the daily stream of quantitative data (e.g. polls, risk ratings), evaluate the impact of policy initiatives, and scrutinize academic research.

I agree with Erik that policy students lack sufficient methodological training, but disagree with his sole focus on quantitative training. Policy makers are poorly equipped to deal with the daily flood of qualitative data they confront–including data best described as ethnographic, discourse-analytic, and narrative in character. They also need to better understand key social-scientific concepts, particularly those involving cultural phenomena.

Once we move beyond the relatively easy case that foreign-affairs students need better comprehensive methodological training, we really do confront a basic disconnect: many IR scholars–whether in Security or International Political Economy (IPE)–don’t adequately understand the difference between “policy implications” and “policy relevance.”

Showing that, for example, trade interdependence lowers the chances of war has clear policy implications, but it isn’t all that relevant to the specific challenges faced by policy makers. No careful US decision-maker would ignore Chinese military power (or vice versa) because two states with market economies are less likely, in a statistical sense, to go to war with one another. Or, to take a different kind of example, Erik’s path-breaking account of how UN Security Council approval enhances the legitimacy of the use of force also has policy implications, but isn’t all that directly useful to policymakers.

Indeed, we need to be careful about the tenor of these arguments, which represent something of a spillover from the journalists-need-to-listen-to-Americanists genre so popular over the last two years. Journalists, and others charged with “making sense” of current events for the public, would do well to pay more attention to political science, sociology, history, and other disciplines. Much of what passes for journalistic accounts of, say, electoral outcomes amounts to repeating back knowledge gained from privileged access to elite conversations [and, to circle back to the need for better methodological training, from polls they don’t know how to interpret]. But those conversations themselves usually constitute flawed “standard stories” about political causation.

Or, even worse, they rely on intellectually flabby pundits and “deep thinkers” better skilled at constructing pithy phrases, public marketing, and appearing on television than at providing coherent analysis.

Policymakers, of course, also benefit from this kind of “making sense” — and academic knowledge can, and should, play a larger role in that process. But let’s not kid ourselves about the policy relevance of much of academic international studies. And in this context I personally worry more about the danger of the flawed study that makes it through peer review (but that would never, ever, ever happen, right?) and influences policy debates. As an academic based in DC — and one with a small amount of policy experience — I’ve seen firsthand how the lure of “making a splash” via “policy relevant” research distorts the production of academic knowledge. It isn’t a pretty sight for anyone involved.

Finally, I think academics underestimate the degree to which the policy apparatus already has, more or less, in-house academics (in the intel community, for example) who do the range of stuff we do, only with access to classified information. More connections between these de facto academics and de jure academics would probably benefit policymaking, insofar as they improve the range of methods, the implementation of methods, and the diversity of findings making an impact on the production of state analytic knowledge.

A McNamara Syndrome?

Robert McNamara was a complex giant in the field. Since his passing on Monday, several prominent IR scholars and practitioners have eulogized his life in a variety of ways. Most authors, however, note the disjointed nature of his legacy—great achievement in modernizing the Department of Defense, but also enormous failure in Vietnam. As someone often consumed by the power of numbers, I find McNamara’s most compelling accomplishment to be his dogged persistence in applying the quantitative approaches to management that garnered him great success at Ford Motor Company to the Department of Defense. McNamara was also an (unconscious) believer in rational choice theory, at a time when these concepts were still the abstract vision of a small community of academics. If nothing else, McNamara’s evidence-based decision making was well ahead of its time.

In yesterday’s New York Times, Errol Morris, director of the definitive McNamara documentary “The Fog of War,” wrote an excellent op-ed pondering how to remember the man. Morris’ closing remark refers directly to McNamara’s rational mentality, and its ultimate fallibility:

If he failed, it is because he tried to bring his idea of rationality to problems that were bigger and more deeply irrational than he or anyone else could rationally understand. For me, the most telling moment in my film about Mr. McNamara, “The Fog of War,” is when he says, “Perhaps rationality isn’t enough.” His career was built on rational solutions, but in the end he realized it all might be for naught.

This is quite provocative, but I reject the assertion that McNamara faced an irrational world. More likely, the ordered rationality he observed in Detroit was muddled in Washington by layers of bureaucratic malaise and political absurdity. What Morris does not consider, however, is how the failure of McNamara’s methods permeated the defense establishment, and with what consequences. That is, after being humiliated in Vietnam, and using McNamara as a prideful scapegoat, did the defense community develop an aversion to the quantification of warfare? A McNamara Syndrome?

The evidence seems to suggest that this may be the case. Of the prominent U.S. military leaders in the following half-century, only Colin Powell approached McNamara in his desire to understand the dynamics of conflict through evidence-based analysis (it should be noted that Gen. Powell was ultimately not rewarded for this approach). What, then, is the role for modern political science in a defense policy community plagued by this syndrome? While there is still an ongoing debate within the discipline as to the value of quantitative versus qualitative methods, the fact is that most contemporary discourse in political science—especially in IR—is based on rational choice models and explained through large-scale quantitative analysis. This creates an extremely problematic paradox: IR scholars reject baseless theory and attempt to explain conflict through simple rational theory and quantitative analysis; IR practitioners, however, reject the value of these theories and methods, and attempt to manage conflict through institutional knowledge.

Perhaps McNamara’s most significant contribution is the institutional fear of methodology his failures instilled at the DoD, ultimately resulting in the much-lamented gap between theory and practice in international relations. As scholars, we must first ask ourselves if we care to overcome the McNamara Syndrome. If so, how can we reconcile our methods with the practice?

Some Rambling Thoughts on the Qual/Quant Pseudo-Divide

Perusing Drew Conway’s excellent blog Zero Intelligence Agents in response to his comment on a previous post, I came across this post of his, reacting to Joseph Nye and Daniel Drezner’s recent bloggingheads diavlog on the theory/policy debate.

You can watch the relevant portion above, though Conway has summarized a key point:

Drezner notes that quantitative scholars tend to have an ‘imperialistic attitude’ about their work, brushing off the work of more traditional qualitative research.

To be exact, by “quantitative scholars” Drezner was referring to those who use “statistical methods and formal models” and by “traditional qualitative research” he meant specifically “more historical / deep background knowledge that’s necessary to the policymaker.” Conway goes on to concur:

In some respect I agree. As a student in a department that covets rational choice and high-tech quantitative methods, I can assure you none of my training was dedicated to learning the classics of political science philosophy. On the other hand, what is stressed here—and in many other “quant departments”—is the importance of research design. This training requires a deep appreciation of qualitative work. If we are producing relevant work, we must ask ourselves: “How does this model/analysis apply to reality? What is the story I am telling with this model/analysis?”

I’d been wanting to put in my two cents since I saw this particular bloggingheads, so I’ll just do so now. I think there are three unnecessary conflations here.

First, between qualitative or quantitative methodologies as approaches and specific methods within either of these two approaches. Drezner is comparing large-N statistical studies to historical case studies. But case-study research is only one type of qualitative work, and not all other types of qualitative work are any more useful to policymakers than large-N statistical studies.

Second, I see a confusion here between qualitative methods as an approach to doing social science and interpretivism as a form of theory (and for that matter, between large-N empirical studies and abstract formal modeling). In his post, Conway equates qualitative methods not with historical descriptive work, but with political theory (or as Conway puts it, political philosophy) and interpretivism. There is a wide continuum of qual methods, some much more scientifically rigorous – that is, focused on description and explanation rather than interpretation or prescription – than others. I also think that there is a similar difference between large-N statistical studies and formal modeling – one relies on data to test theories, the other relies on abstract math and logic and is largely divorced from real-world evidence.

In both cases, I think the imperialism being described above (if any) is really the imperialism of empirical science over pure theory. I think that the imperialism of quantitative methods over qualitative methods must be judged, if it exists, against only qualitative approaches that are actually designed to be scientific. Within that context, you may be surprised how much respect these scholars have for one another’s work – though, perhaps that’s just based on my good experiences collaborating and communicating with quantoids, experiences others may not share.

Third and finally, I think researchers and their methods are being conflated here. Bloggingheads.tv is perhaps most guilty, labeling this clip “quals v. quants” as if these methods were mutually exclusive and as if scholars were defined by the methods they use. (And in fact, I just noticed I did it myself in the previous paragraph with the term “quantoids.”) But most of the doctoral dissertations I see coming out today use mixed methods – that is, some combination of case studies and statistics. And much qualitative work, including much of my own, is actually quantitative as well. It’s qualitative insofar as I’m studying text data and using grounded theory to generate analytical categories. But it’s quantitative in the sense that I convert those categories (codes) into frequency distributions that tell us something about the objective properties of the text, and in the sense that I use mathematical inter-rater reliability measures to report just how objective those properties are.
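
To make that last point concrete, here is a minimal sketch of the kind of quantification I have in mind: hand-assigned grounded-theory codes tallied into a frequency distribution, with Cohen’s kappa as one common inter-rater reliability statistic. This is illustrative only, not my actual coding pipeline; the code labels, the coder assignments, and the choice of kappa (rather than some other reliability measure) are all assumptions made for the example.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who each assigned one code to the same N text segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of segments where the two coders match.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[code] * freq_b.get(code, 0) for code in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to the same six text segments.
coder_a = ["threat", "identity", "threat", "economy", "identity", "threat"]
coder_b = ["threat", "identity", "economy", "economy", "identity", "threat"]

print(Counter(coder_a))                           # frequency distribution of codes
print(round(cohens_kappa(coder_a, coder_b), 2))   # inter-rater reliability: 0.75
```

On these toy data the coders agree on five of six segments (observed agreement of about 0.83) against a chance baseline of about 0.33, which kappa combines into 0.75, the sort of single number one can report to show how objective the coding categories actually are.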

Anyway, as a self-identified qualitative scholar whose work varies between interpretivism and rigorous social science studies of text (and who therefore is quite conscious of the difference), but who is also quite open to collaborating with quantitative researchers depending on the nature of the problem I’m working on, I hate to buy into a discourse that pigeonholes IR scholars as one thing or another.

Ultimately, I think the distinction Nye and Drezner are really talking about here is not methodological. Rather, it’s between those scholars capable of translating their findings (through whatever method) into language accessible to policymakers, and those who refuse to learn those skills. As I argued once before, perhaps this process of translation is a “methodology” of its own that we should be incorporating into our doctoral curriculum as a discipline.
