Qualitative scholars in political science are used to thinking of themselves as under threat from quantitative researchers. Yet qualitative scholars’ responses to quantitative “imperialism” suggest that they misunderstand the nature of that threat. The increasing flow of data, the growing availability of computing power and easy-to-use software, and the relative ease of training new quantitative researchers make the position of qualitative scholars more precarious than they realize. Consequently, qualitative and multi-method researchers must stress not only the value of methodological pluralism but also what makes their work distinctive.
Few topics are so perennially interesting for the individual political scientist and the discipline as the Question of Method. This is quickly reduced to the simplistic debate of Quant v. Qual, framed as a battle of those who can’t count against those who can’t read. Collapsing complicated methodological positions into a single dimension obviously does violence to the philosophy of science underlying these debates. Thus, even divisions that really run along other dimensions of methodological debate, such as those that separate formal theorists and interpretivists from case-study researchers and econometricians, are lumped into this artificial dichotomy. Formal guys know math, so they must be quants, or at least close enough; interpretivists use language, ergo they are “quallys” (in the dismissive nomenclature of Internet comment boards), or at least close enough. And so elective affinities are reified into camps, among which ambitious scholars must choose.
(Incidentally, let’s not delude ourselves into thinking that multi-method work is a via media. Outside of disciplinary panels on multi-method work, in their everyday practice, quantoids proceed according to something like a one-drop rule: if a paper contains even the slightest taint of process-tracing or case studies, then it is irremediably quallish. In this, then, those of us who identify principally as multi-method stand in relation to the qual-quant divide rather as Third Way folks stand in relation to left-liberals and to all those right of center. That is, the qualitative folks reject us as traitors, while the quant camp thinks that we are all squishes. How else to understand EITM, which is the melding of deterministic theory with stochastic modeling but which is not typically labeled “multi-method”?)
The intellectual merits of these positions have been covered better elsewhere (as in King, Keohane, and Verba 1994, Brady and Collier’s Rethinking Social Inquiry, and Patrick Thaddeus Jackson’s The Conduct of Inquiry in International Relations). Kathleen McNamara, a distinguished qualitative IPE scholar, warns against an intellectual monoculture in her 2009 article on the subject. And I think that readers of the Duck are largely sympathetic to her points and to similar arguments. But even as the intellectual case for pluralism grows stronger (not least because the standards for qualitative work have gotten better), we should recognize that it is incontestable that quantitative training makes scholars more productive (by the simple articles-per-year metric) than qualitative training does.
Quantitative researchers work in a tradition that has self-consciously made the transmission of the techne of data management, data collection, and data analysis vastly easier not only than in its case-study, interpretivist, and formal counterparts but even than in quant training a decade or more ago. By techne, I do not mean the high-concept philosophy of science. All of that is usually about as difficult and as rarefied as the qualitative or formal high-concept readings, and about as useful to the completion of an actual research project–which is to say, not very, except insofar as it is shaped into everyday practice and reflected in the shared norms of the average seminar table or reviewer pool. (And it takes a long time for rarefied theories to percolate. That R^2 continues to be reported as an independently meaningful statistic even 25 years after King (1986) is shocking, but the Kuhnian generational replacement has not yet really begun to weed out such ideological deviationists.)
No, when I talk about techne, I mean something closer to the quotidian translation of the replication movement, which is rather like the business consultant notion of “best practices.” There is a real craft to learning how to manage data, and how to write code, and how to present results, and so forth, and it is completely independent of the project on which a researcher is engaged. Indeed, it is perfectly plausible that I could take most of the thousands of lines of data-cleaning and analysis code that I’ve written in the past month for the General Social Survey and the Jennings-Niemi Youth-Parent Socialization Survey, tweak four or five percent of the code to reflect a different DV, and essentially have a new project, ready to go. (Not that it would be a good project, mind you, but going from GRASS to HOMOSEX would not be a big jump.) In real life, there would be some differences in the model, but the point is simply that standard datasets are standard. (Indeed, in principle and assuming clean data, if you had the codebook, you could even write the analysis code before the data had come in from a poll–which is surely how commercial firms work.)
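To make that concrete, here is a minimal sketch, not my actual analysis code, of what “tweak the DV” means in practice. It assumes a hypothetical, already-cleaned GSS extract called gss_extract.csv; GRASS and HOMOSEX are real GSS variable names, but the recode and the control variables below are illustrative placeholders.

```python
# Minimal sketch (hypothetical file and controls): swapping the dependent variable
# in a cleaned survey dataset is a one-line change.
import pandas as pd
import statsmodels.formula.api as smf

DV = "grass"   # change to "homosex" and re-run; nothing else needs to move

df = pd.read_csv("gss_extract.csv")    # hypothetical pre-cleaned GSS extract
df[DV] = (df[DV] == 1).astype(int)     # crude 0/1 recode, for illustration only

model = smf.logit(f"{DV} ~ age + educ + C(partyid)", data=df).fit()
print(model.summary())
```

The point is not that this would be a good model, only that the marginal cost of the “new” project is a single line.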
There is nothing quite like that for qualitative researchers. Game theory folks come close, since they can tweak models indefinitely, but of course they then have to find data against which to test their theories (or not, as the case may be). Neither interpretivists nor case-study researchers, however, can automate the production of knowledge to the same extent that quantitative scholars can. And neither of those approaches appears to be as easily taught as quant approaches.
Indeed, the teaching of methods shows the distinction plainly enough. Gary King makes the point well in an unpublished paper:
A summary of these features of quantitative methods is available by looking at how this information is taught. Across fields and universities, training usually includes sequences of courses, logically taken in order, covering mathematics, mathematical statistics, statistical modeling, data analysis and graphics, measurement, and numerous methods tuned for diverse data problems and aimed at many different inferential targets. The specific sequence of courses differ across universities and fields depending on the mathematical background expected of incoming students, the types of substantive applications, and the depth of what will be taught, but the underlying mathematical, statistical, and inferential framework is remarkably systematic and uniformly accepted. In contrast, research in qualitative methods seems closer to a grab bag of ideas than a coherent disciplinary area. As a measure of this claim, in no political science department of which we are aware are qualitative methods courses taught in a sequence, with one building on, and required by, the next. In our own department, more than a third of the senior faculty have at one time or another taught a class on some aspect of qualitative methods, none with a qualitative course as a required prerequisite.
King has grown less charitable toward qualitative work than he was in KKV. But he is on to something here: if every quant scholar has gone through the probability theory → OLS → MLE → {multilevel, hazard, Bayesian, …} sequence, what is the corresponding path for a “qually”? What could such a path even look like? And who would teach it? What books would they use? There is no equivalent of, say, Long and Freese for the qualitative researcher.
The problem, then, is that it is comparatively easy to make a competent quant researcher. But it is very hard to train up a great qualitative one. Brad DeLong put the problem plainly in his obituary of J.K. Galbraith:
Just what a “Galbraithian” economist would do, however, is not clear. For Galbraith, there is no single market failure, no single serpent in the Eden of perfect competition. He starts from the ground and works up: What are the major forces and institutions in a given economy, and how do they interact? A graduate student cannot be taught to follow in Galbraith’s footsteps. The only advice: Be supremely witty. Write very well. Read very widely. And master a terrifying amount of institutional detail.
This is not, strictly, a qual problem. Something similar happened with Feynman, who left no major students either (although note that this failure is regarded as exceptional). And there are a great many top-rank qualitative professors who have grown their own “trees” of students. But the distinction is that the qualitative apprenticeship model cannot scale, whereas you can easily imagine a very successful large-lecture course, or even a distance-learning class, teaching the fundamental architecture of quant approaches.
This is among the reasons I think that the Qual v Quant battle is being fought on terms that are often poorly chosen, both from the point of view of the qualitative researcher and from that of the discipline. Quant researchers will simply be more productive than quals, and that differential will continue to widen. (This is a matter of differential rates of growth; quals are surely more productive now than they once were, and their productivity growth will accelerate as they adopt more computer-driven workflows as well. But there is no comparison between the way in which increases in computing power have affected quallys and the way they have made it possible for even a Dummkopf like me to fit a practically infinite number of logit models in a day.) This makes revisions easier, by the way: a quant guy with domesticated datasets can redo a project in a day (unless his datasets are huge), but the qual guy will have to spend that much time just pulling books off the shelves.
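The “redo a project in a day” point can be put just as concretely. Here is a hedged illustration, with a hypothetical file and hypothetical variable names: once a dataset has been domesticated, re-estimating a whole battery of logit specifications is a loop over formulas, not a month of work.

```python
# Hedged illustration only: "cleaned_survey.csv" and the variables (voted, age,
# educ, income, region) are hypothetical placeholders, not a real dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cleaned_survey.csv")

specs = [
    "voted ~ age + educ",
    "voted ~ age + educ + income",
    "voted ~ age + educ + income + C(region)",
]

for formula in specs:
    fit = smf.logit(formula, data=df).fit(disp=False)   # disp=False silences optimizer output
    print(formula, "-> educ coefficient:", round(fit.params["educ"], 3))
```

Re-running all of it after a referee asks for a different specification is, for the quant side, a matter of editing the list and waiting a few seconds.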
The qual-quant battles are fought over what the desirable balance between the two fields should be. And yet the more important point has to do with the viability, or perhaps the “sustainability,” of qualitative work in a world in which we might reasonably expect quants to generate three to five times as many papers in a given year as a qual guy. Over time, we should expect this to lead first to a gradual erosion of the qually population, and then to a sudden collapse.
I want to make plain that I think this would be a bad thing for political science. The point of the DeLong piece is that a discipline without Galbraiths is a poorer one, and I think the Galbraiths who have some methods training would be much better than those who simply mastered lots and lots of facts. But a naive interpretation of productivity ratios by university administrators and funding agencies will likely lead to qualitative work’s extinction within political science.