If you have been living under a rock as I apparently have, then like me you may be unaware of the DA-RT controversy that is brewing in the American Political Science Association.* Turns out that some of our colleagues have been pushing for some time to write a new set of rules for qualitative scholarship that, among other things, will require “cited data [be] available at the time of publication through a trusted digital repository” [This is from the Journal Editors’ Transparency Statement, which is what is being implemented Jan. 15]. The goal, I gather, is to enhance transparency and reproducibility. A number of journal editors have signed on, although Jeffrey Isaac, editor at Perspectives on Politics, has refused to sign onto the DA-RT agenda.

There are a number of reasons to doubt that the DA-RT agenda will solve whatever problem it aims to address. Many of them are detailed in a petition (which I have signed) to delay implementation of the DA-RT protocol, currently set for January 15, 2016. To explore how posting data is more or less an exercise in optics that does little to enhance transparency or reproducibility, I want to run through a hypothetical scenario for interviews, arguably the qualitative method most prone to suspicion.

Regardless of the subject, IRBs nearly always insist on anonymity for interviewees. That means that, in addition to names and identifying markers being scrubbed, recordings of interviews cannot be made public (if they even exist, which many IRB decisions preclude). Therein lies the central problem: meaningful transparency is impossible, and as a result reproducibility as DA-RT envisions it is deeply impaired. Even if someone were interested in reproducing a study relying on interviews, doing so would be hindered by the fact that s/he would not be able to interview the same people as the person(s) who undertook the study (this neglects, of course, that the reproduction interviews could not be conducted at the same time, introducing the possibility of contingency effects). Given this very simple and nearly universal IRB requirement, there is fundamentally nothing to stop a nefarious, ne’er-do-well academic poser from completely fabricating the interview data that gets posted to the digital repository DA-RT requires, because there is no way to verify it (e.g., call up the person who gave the interview and ask if they really said that?!).

The problem for DA-RT in this scenario gets worse, though. One might argue that even if audio cannot be posted, a transcription of the interview could be made available. But even if such a transcription is possible (again, the IRB may mandate that recordings not be made, or the interviewer may forgo recording in hopes of eliciting as honest a response as possible; would DA-RT override the judgment of scholars in the field?), depending on the subject it may be impossible to post a complete transcript of an interview because doing so would provide enough contextual information to identify the interviewee. If the scholar has only handwritten or digital notes, the full set of notes may still be too revealing, and even if it is not, the notes are not an unadulterated record of the interview but rather the scholar’s distillation of it.

That leaves us where we started: either a distilled record of the interview, which may or may not be objectively accurate and is certainly neither transparent nor technically reproducible, or pieces of an interview that the scholar analyzes in the context of the study. Sure, we can post these to a digital repository, but transparency and reproduction are not enhanced, because there is no unadulterated data or unrevealed information in the repository. So what’s the point?

Practical points aside, there are broader methodological and disciplinary concerns. Political science needs greater intellectual diversity, not less. As Isaac points out, the DA-RT protocol is suggestive of an epistemological and methodological disciplining move that will force scholars ever closer into a neopositivist straitjacket.

It also encourages us to continue to believe that quantitative approaches have cornered the market on transparency and reproducibility. But they haven’t. What do I mean? Transparency and reproducibility in quantitative studies largely address the process of analysis. If you want to run a regression on some data, you have to show your work. Other people run the same data on their computers, making the same decisions, and see if it checks out. Questions of transparency and reproduction lie in the analytical decisions, not in the data itself. No one[**] checks the decisions Polity or Correlates of War made in their coding of each data point, much less the data they were coding, much less that whiz-bang novel dataset that nobody but you is using. No one assembles a new Polity to see if they get the same numbers as the existing Polity. And that is what true reproducibility would require. But we don’t see new Polity datasets because we aren’t really interested in data reproducibility, but rather analytical reproducibility: the weighting decisions, whether a spline was the appropriate mode of interpolation, whether there was a lag factor, and so on.*** But unlike quantitative methods, qualitative methods put the [interpretive] analytical process right up front, in the written piece. [Because the interpretive analytical move is usually made in the body of the text rather than in a data set or an unpublished regression algorithm,] qualitative scholarship already has analytical transparency and, insofar as it is possible, reproducibility.
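To make the quantitative case concrete: analytical reproducibility amounts to rerunning the published analysis on the published data and checking that the estimates match. Here is a minimal sketch of that idea, with made-up data standing in for a posted data file (the variable names, dataset, and least-squares approach are hypothetical illustrations, not anything from the DA-RT debate):

```python
import numpy as np

# Hypothetical "posted" dataset: an outcome y and two covariates
# (a fixed seed stands in for a data file in a repository).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(scale=0.1, size=100)

# The "author's" regression, and a "replicator's" rerun of the
# same analytical decisions on the same data.
beta_author, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_replicator, *_ = np.linalg.lstsq(X, y, rcond=None)

# Analytical reproducibility: identical data + identical decisions
# yield identical estimates.
print(np.allclose(beta_author, beta_replicator))  # True
```

The check succeeds trivially here; the point is that what gets verified is the chain of analytical decisions, not the provenance of the data itself, which is exactly the asymmetry the paragraph above describes.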

I admit that the issues at play are more complicated than I represent here, and that my logic may be a bit slapdash as I try to write this and pack for ISA Northeast. For those interested in a nuanced and useful discussion of the issues, the Spring 2015 Qualitative and Multimethod Research newsletter is a good start.[****]

N.B. I have made a few minor edits to clarify points. Where I have done so, the edits are in [].

*Many thanks to Jelena Subotic and Kai Thaler for awakening me to this matter.

[**A good discussion on Twitter has made the case that I overstate this point.  I accept that.]

***On the challenges of analytical reproduction, Nature has an interesting commentary.

[****In the interest of fairness, perspectives disagreeing with me can be found at Tom Pepinsky’s and Thomas J. Leeper’s blogs.]