Tag: qualitative data analysis

Visualizing the Correlates of Global Issue Creation

My big work-related task this month is to pull together my focus group findings into something approximating a theoretically relevant conference paper. I like to start with visualizations.

This chart is derived from a code scheme we developed to categorize human security practitioners’ responses in focus groups to the question: what factors facilitate or inhibit the emergence of new issues on the global policy agenda? Answers fell into five broad buckets, and this is an overview of the contents of the various buckets and how they relate to one another and to the extant literature on advocacy networks. Of most interest to my project is the highly central yet largely under-theorized category “network effects.”

Reactions to the chart, the theoretical argument embodied here, or the data are very welcome.

Also, a bleg: do any readers know of user-friendly visualization software to make a graph like this interactive? I’d like viewers to be able to move their pointer over a code-name and see the code definition, and over a code bucket and see a frequency distribution of associated codes.
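
(To make the ask concrete, here is a minimal sketch of the hover behavior I have in mind, written in Python with the open-source pyvis library; the node names, definitions, and frequencies below are placeholders, not our actual scheme.)

```python
# Minimal sketch: an interactive graph in which hovering over a node shows
# its code definition (pyvis renders the "title" attribute as a tooltip).
from pyvis.network import Network

net = Network(height="600px", width="100%")

# Hypothetical bucket node: the tooltip could carry a frequency distribution.
net.add_node("network_effects", label="network effects",
             title="Bucket. Associated codes: gatekeepers (12), issue fit (9)")

# Hypothetical code nodes: the tooltips carry the code definitions.
net.add_node("gatekeepers", label="gatekeepers",
             title="Definition: central hubs that vet candidate issues")
net.add_node("issue_fit", label="issue fit",
             title="Definition: match between a new issue and existing frames")

net.add_edge("network_effects", "gatekeepers")
net.add_edge("network_effects", "issue_fit")

# Produces a self-contained HTML file that could be embedded in a blog post.
net.write_html("code_scheme.html")
```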

Unpacking the Anti-Drone Debate

My student Lina Shaikhouni and I have a new Foreign Policy piece in which we make “The Case Against the Case Against Drones,” to paraphrase Stephanie.

Therein, we argue that although there are many good reasons why drones shouldn’t be used as they are being used, or where they are being used, the arguments against drones per se are often based on misconceptions or assumptions for which we don’t have good data.

In particular, we point out:

1) drones are not killer robots
2) there’s no evidence that “video game” warfare makes war more likely
3) drones don’t violate humanitarian law
4) we don’t have good data on civilian casualties

We also argue that the focus on drones per se is a distraction from what are actually more profound and wider issues:

… Those who oppose the way drones are used should shift focus to one of the big normative problems touched by the drone issue: whether truly autonomous weapons should be permitted in combat, how to accurately track the civilian cost of different weapons platforms, and whether targeted killings — by drones or SEAL teams — are lawful means to combat global terrorism… Focusing on the drones themselves misses this bigger picture.

What you see above is a visualization of the data on which our argument is based, created with help from Dan Glaun (click here for clearer picture). We ran a search for “drone warfare” on DailyOpEd.com, which indexes opinion articles and letters to the editor, and identified 29 hits that dealt with drones specifically (as opposed to just mentioning them in the context of arguments about other things). We loaded these documents into a tool called DiscoverText and used it to render a tag cloud of the most frequently used words across the set of op-eds.
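
(For readers who want to replicate the tag cloud step on their own set of documents without DiscoverText, here is a minimal sketch in Python using the open-source wordcloud package; the folder and file names are placeholders.)

```python
# Minimal sketch: build a word-frequency tag cloud from a folder of op-ed
# text files, approximating the DiscoverText step described above.
import glob
from wordcloud import WordCloud

# Concatenate the op-eds into a single string.
corpus = ""
for path in glob.glob("oped_corpus/*.txt"):  # placeholder path
    with open(path, encoding="utf-8") as f:
        corpus += f.read() + "\n"

# WordCloud tokenizes, drops common English stopwords, and sizes each word
# by its frequency across the whole corpus.
cloud = WordCloud(width=800, height=400, background_color="white",
                  max_words=100).generate(corpus)
cloud.to_file("drone_tag_cloud.png")
```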

We then drilled into the text itself, studied it closely, and used a consensus coding scheme to tag specific instances of the types of misconceptions we identified across all the op-eds. Any statistical or numerical references in the article are based on this coding. While we don’t single out any author in particular as intending to make any of these particular arguments, the aggregation of passages across many op-eds adds up to a pattern in the broader debate. Specific quotations emblematic of the misconceptions we noticed are highlighted in the infographic above.

Of Quals and Quants

Qualitative scholars in political science are used to thinking of themselves as under threat from quantitative researchers. Yet qualitative scholars’ responses to quantitative “imperialism” suggest that they misunderstand the nature of that threat. The increasing flow of data, the growing availability of computing power and easy-to-use software, and the relative ease of training new quantitative researchers make the position of qualitative scholars more precarious than they realize. Consequently, qualitative and multi-method researchers must stress not only the value of methodological pluralism but also what makes their work distinctive.

Few topics are so perennially interesting for the individual political scientist and the discipline as the Question of Method. This is quickly reduced to the simplistic debate of Quant v. Qual, framed as a battle of those who can’t count against those who can’t read. Collapsing complicated methodological positions into a single dimension obviously does violence to the philosophy of science underlying these debates. Thus, even divisions that really fall along other dimensions of methodological debate, such as those separating formal theorists and interpretivists from case-study researchers and econometricians, are lumped into this artificial dichotomy. Formal guys know math, so they must be quants, or at least close enough; interpretivists use language, ergo they are “quallys” (in the dismissive nomenclature of Internet comment boards), or at least close enough. And so elective affinities are reified into camps, among which ambitious scholars must choose.

(Incidentally, let’s not delude ourselves into thinking that multi-method work is a via media. Outside of disciplinary panels on multi-method work, in their everyday practice, quantoids proceed according to something like a one-drop rule: if a paper contains even the slightest taint of process-tracing or case studies, then it is irremediably quallish. In this, then, those of us who identify principally as multi-method stand in relation to the qual-quant divide rather as Third Way folks stand in relation to left-liberals and to all those right of center. That is, the qualitative folks reject us as traitors, while the quant camp thinks that we are all squishes. How else to understand EITM, which is the melding of deterministic theory with stochastic modeling but which is not typically labeled “multi-method”?)

The intellectual merits of these positions have been covered better elsewhere (as in King, Keohane, and Verba 1994; Brady and Collier’s Rethinking Social Inquiry; and Patrick Thaddeus Jackson’s The Conduct of Inquiry in International Relations). Kathleen McNamara, a distinguished qualitative IPE scholar, argues against the possibility of an intellectual monoculture in her 2009 article on the subject. And I think that readers of the Duck are largely sympathetic to her points and to similar arguments. But even as the intellectual case for pluralism grows stronger (not least because the standards for qualitative work have gotten better), we should recognize that it is incontestable that quantitative training makes scholars more productive (by the simple articles-per-year metric) than qualitative training does.

Quantitative researchers work in a tradition that has self-consciously made the transmission of the techne of data management, data collection, and data analysis vastly easier than in its case-study, interpretivist, and formal counterparts, and even than in quant training a decade or more ago. By techne, I do not mean the high-concept philosophy of science. All of that is usually about as difficult and as rarefied as the qualitative or formal high-concept readings, and about equally useful to the completion of an actual research project–which is to say, not very, except insofar as it is shaped into everyday practice and reflected in the shared norms of the average seminar table or reviewer pool. (And it takes a long time for rarefied theories to percolate. That R^2 continues to be reported as an independently meaningful statistic even 25 years after King (1986) is shocking, but the Kuhnian generational replacement has not yet really begun to weed out such ideological deviationists.)

No, when I talk about techne, I mean something closer to the quotidian translation of the replication movement, which is rather like the business consultant notion of “best practices.” There is a real craft to learning how to manage data, and how to write code, and how to present results, and so forth, and it is completely independent of the project on which a researcher is engaged. Indeed, it is perfectly plausible that I could take most of the thousands of lines of data-cleaning and analysis code that I’ve written in the past month for the General Social Survey and the Jennings-Niemi Youth-Parent Socialization Survey, tweak four or five percent of the code to reflect a different DV, and essentially have a new project, ready to go. (Not that it would be a good project, mind you, but going from GRASS to HOMOSEX would not be a big jump.) In real life, there would be some differences in the model, but the point is simply that standard datasets are standard. (Indeed, in principle and assuming clean data, if you had the codebook, you could even write the analysis code before the data had come in from a poll–which is surely how commercial firms work.)
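
(To make that concrete, here is a minimal sketch of the kind of swap I mean, in Python with pandas and statsmodels; GRASS and HOMOSEX are real GSS variable names, but the file path, recode, and covariates are illustrative placeholders, not my actual model.)

```python
# Minimal sketch: in a standard-dataset workflow, changing the DV is a
# one-line edit; the rest of the cleaning and modeling code is reusable.
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss_extract.csv")  # placeholder path to a GSS extract

# Recode the DV to binary. Swapping "grass" for "homosex" on this line is,
# essentially, the whole difference between the two projects.
gss["dv"] = (gss["grass"] == 1).astype(int)

model = smf.logit("dv ~ age + educ + C(partyid)", data=gss).fit()
print(model.summary())
```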

There is nothing quite like that for qualitative researchers. Game theory folks come close, since they can tweak models indefinitely, but of course they then have to find data against which to test their theories (or not, as the case may be). Neither interpretivists nor case-study researchers, however, can automate the production of knowledge to the extent that quantitative scholars can. And neither of those approaches appears to be as easily taught as quant approaches.

Indeed, the teaching of methods shows the distinction plainly enough. Gary King makes the point well in an unpublished paper:

A summary of these features of quantitative methods is available by looking at how this information is taught. Across fields and universities, training usually includes sequences of courses, logically taken in order, covering mathematics, mathematical statistics, statistical modeling, data analysis and graphics, measurement, and numerous methods tuned for diverse data problems and aimed at many different inferential targets. The specific sequence of courses differ across universities and fields depending on the mathematical background expected of incoming students, the types of substantive applications, and the depth of what will be taught, but the underlying mathematical, statistical, and inferential framework is remarkably systematic and uniformly accepted. In contrast, research in qualitative methods seems closer to a grab bag of ideas than a coherent disciplinary area. As a measure of this claim, in no political science department of which we are aware are qualitative methods courses taught in a sequence, with one building on, and required by, the next. In our own department, more than a third of the senior faculty have at one time or another taught a class on some aspect of qualitative methods, none with a qualitative course as a required prerequisite.

King has grown less charitable toward qualitative work than he was in KKV. But he is on to something here: if every quant scholar has gone through the probability theory → OLS → MLE → {multilevel, hazard, Bayesian, …} sequence, what is the corresponding path for a “qually”? What could such a path even look like? And who would teach it? What books would they use? There is no equivalent of, say, Long and Freese for the qualitative researcher.

The problem, then, is that it is comparatively easy to make a competent quant researcher. But it is very hard to train up a great qualitative one. Brad DeLong put the problem plainly in his obituary of J.K. Galbraith:

Just what a “Galbraithian” economist would do, however, is not clear. For Galbraith, there is no single market failure, no single serpent in the Eden of perfect competition. He starts from the ground and works up: What are the major forces and institutions in a given economy, and how do they interact? A graduate student cannot be taught to follow in Galbraith’s footsteps. The only advice: Be supremely witty. Write very well. Read very widely. And master a terrifying amount of institutional detail.

This is not, strictly, a qual problem. Something similar happened with Feynman, who left no major students either (although note that this failure is regarded as exceptional). And there are a great many top-rank qualitative professors who have grown their own “trees” of students. But the distinction is that the qualitative apprenticeship model cannot scale, whereas you can easily imagine a very successful large-lecture approach to mastering the fundamental architecture of quant approaches or even a distance-learning class.

This is among the reasons I think that the Qual v Quant battle is being fought on terms that are often poorly chosen, both from the point of view of the qualitative researcher and also from the discipline. Quant researchers will simply be more productive than quals, and that differential will continue to widen. (This is a matter of differential rates of growth; quals are surely more productive now than they were, and their productivity growth will accelerate as they adopt more computer-driven workflows, as well. But there is no comparison between the way in which computing power increases have affected quallys and the way they have made it possible for even a Dummkopf like me to fit a practically infinite number of logit models in a day.) This makes revisions easier, by the way: a quant guy with domesticated datasets can redo a project in a day (unless his datasets are huge) but the qual guy will have to spend that much time pulling books off the shelves.

The qual-quant battles are fought over the desirable balance between the two fields. And yet the more important point has to do with the viability, or perhaps the “sustainability,” of qualitative work in a world in which we might reasonably expect quants to generate three to five times as many papers in a given year as a qual guy. Over time, we should expect this to lead first to a gradual erosion of quallys’ population and then to a sudden collapse.

I want to make plain that I think this would be a bad thing for political science. The point of the DeLong piece is that a discipline without Galbraiths is a poorer one, and I think the Galbraiths who have some methods training would be much better than those who simply mastered lots and lots of facts. But a naive interpretation of productivity ratios by university administrators and funding agencies will likely lead to qualitative work’s extinction within political science.

Could Simple Automated Tools Help Wikileaks Protect Its Afghan Sources?

Julian Assange has a problem. When pressed by human rights organizations, who also feared the effects on Afghan civilians, to redact any current or future published documents, he reportedly replied that he had no time to do so and “issued a tart challenge for the human rights organizations themselves to help with ‘the massive task of removing names from thousands of documents.'”

Leaving aside his alleged claims about the moral responsibility of human rights groups for his own errors, the charitable way to think about his reaction is that Assange wants to do the right thing but simply doesn’t have the capacity. Indeed, in a recent tweet he implored his followers to suggest ideas:

Need $700k for our next harm-minimization review… What to do?

Fair enough. Here’s an idea: how about using information technology?

As my husband, a.k.a. the Household Chief Technology Officer, pointed out over coffee this morning, what Assange is essentially in possession of is a large quantity of text data. There are many qualitative data analysis applications that allow users to sift easily through such data in search of specific discursive properties – I use one myself when I analyze interviews, focus groups, or web content. Named entity recognition software allows users to identify all the names or places in large quantities of text. Open-source variants like AFNER are available.

Corporations and governments already use such tools, controversially, for data mining: searching for connections between names and places in large quantities of text. Could they not be equally leveraged in the service of privacy and confidentiality? How hard or costly would it really be to use such tools to identify and then automatically redact every name in a set of texts, or to have a human being (or a team with a clear-cut coding scheme) go through the entire dataset and choose, with a few keystrokes, what should be removed or blacked out?
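
(For concreteness, a minimal sketch of the automated first pass I have in mind, in Python using NLTK's off-the-shelf named entity chunker; AFNER or another NER engine could stand in, the sample sentence is invented, and a real pipeline would route every flagged passage to human review.)

```python
# Minimal sketch: flag person names in free text and replace them with a
# redaction marker. Accuracy on transliterated names would need testing.
import nltk

for pkg in ("punkt", "averaged_perceptron_tagger",
            "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

def redact_names(text):
    # Tokenize, part-of-speech tag, then chunk named entities into a tree.
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))
    out = []
    for node in tree:
        if isinstance(node, nltk.Tree):
            if node.label() == "PERSON":
                out.append("[REDACTED]")  # candidate name found
            else:
                # Keep other entity types (places, organizations) intact.
                out.append(" ".join(word for word, tag in node.leaves()))
        else:
            out.append(node[0])  # ordinary (word, part-of-speech) token
    return " ".join(out)

print(redact_names("Mohammed Khan met coalition officers near Kandahar."))
```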

For me, it would be hard, unless someone handed me a software package that already blended these elements. But that’s primarily because I’m not a computer programmer. Julian Assange is.

Questions for readers: if you understand software design and available OTS or open-source applications better than I do, how far-fetched is it to solve Wikileaks’ redaction problem in this way? Am I being daftly optimistic here? Or, do you have other ideas in response to Mr. Assange’s query? Comment away.

[cross-posted at Lawyers, Guns and Money]
