Simple steps to promote qualitative research in journals
It happened again. After months of waiting, you finally got that “Decision” email: Rejection.
That’s not so bad; it happens to everyone. But it’s the nature of the rejection that gets to you. The reviewers (you assume fellow quals) didn’t engage with your careful use of process tracing or your intricate case selection method. They just questioned your findings, pointed out your imperfect data, and chided you for leaving out irrelevant historical details.
Basically, the reviewers refused to engage with the qualitative methods that are meant to help improve qualitative research. This, unfortunately, is a common experience. It reflects a fundamental difference between quant and qual assessments, and it leaves qualitative international relations (IR) scholars as their own worst enemies.
I consider myself a mixed-methods scholar, but you wouldn’t really know that from my CV. Most of my articles are completely quantitative. Of those that are not, two are typological discussions and the final one is primarily quantitative with brief case studies. It’s not that I didn’t try to publish qualitative work. My qualitative work, however, tended either to get rejected from journals or to receive reactions ranging from apathetic to hostile when I presented it at conferences.
Talking with other international relations scholars, I know I’m not alone. We want to publish qualitative research and increase its prominence in IR. But qualitative work feels even more vulnerable to the arbitrariness of peer review. Given the immense pressure to publish we all face, many feel compelled to rely on quant to get by.
The self-imposed obstacles qualitative scholars face
What is going on?
One could argue that my qualitative work just isn’t that good. I obviously don’t think that’s true. So let’s move on.
Some of this has to do with differences between quantitative and qualitative research. Quantitative work aspires to SCIENCE. It attempts to measure phenomena with transparent and replicable methods. Qualitative work is more impressionistic. Qualitative methods are not meant to be as standardized as OLS regression. There are as many disagreements within qualitative methodology as there are between quant and qual. So it is inevitable that the assessment of qualitative work will be less precise.
This may be a bigger problem in IR. IR studies (especially in journals) often have to rely on messy or simplified data. We deal with broad time frames, opaque political processes, and multiple countries. This makes it difficult to be specific enough for cranky area studies experts. We tend to focus on discrete aspects of cases, instead of the overall case itself. Our studies thus leave out a lot of historical context and detail.
And this problem may matter more for certain types of research. It is not that hard to find qualitative discussions that reflect dominant understandings of international relations, especially in security studies. These benefit from either confirming “common wisdom” or drawing on one side of a well-established debate in international relations. But those trying to push back on the common wisdom or expand the debate face greater scrutiny, and have their qualitative work more readily dismissed by reviewers.
The problem: quant and qual scholars review differently
But I think there is a fundamental distinction in the way scholars assess quant and qual work beyond differences in the methodologies. Quantitative reviewers tend to focus on the methods. They assess whether the methods were used appropriately and whether they overcame data limitations. If the work satisfies these concerns, they tend to accept the findings. Qualitative reviewers tend to focus on the findings or data issues. If the findings appear strange or the data flawed, they discount the cases. This has been my experience, and it is the reason for the imbalance between quant and qual work in my CV.
Quals may see this distinction as a sign of higher standards. Instead, it is holding back the impact of qualitative work on IR and discouraging junior scholars from pursuing qualitative research.
It is also completely unnecessary. The point of qualitative methodology is to create a set of transparent standards to assess the quality of research. Case selection techniques demonstrate a scholar is not cherry-picking examples. They also ensure generalizability. Methods like process tracing and counterfactuals reveal how a scholar came to their conclusions.
That is, even if you don’t agree with a qualitative study’s findings, you should be persuaded by its methods. And if you are persuaded by its methods…it should be publishable. Until qualitative scholars change their mindset when assessing qualitative work, qualitative research will never be as prominent as quantitative research.
What qual IR scholars can do
Given that, there are a few things we can do as conference discussants, peer reviewers, and journal editors to ensure qualitative scholars get useful feedback and viable publication prospects:
- Distinguish between trivia and relevant data: Remember that the case study you’re reading is not a case study of, for example, Pakistan or World War II. It is a study of Pakistan’s support for the Taliban or disputes among the Allies. That important piece of historical context you care so much about may not be necessary. Don’t reject the paper because the author left it out.
- Allow for imperfect data: We are unlikely to ever have perfect data in IR. There aren’t records of leaders saying “I am about to invade this country because X.” Even the best qualitative work will involve some ambiguity. Don’t ding the paper for this. Instead, look at how the author dealt with it. Did they acknowledge it? Did they address its potential impacts on their findings? Did they explain how they overcame it? I provided some recommendations in a recent article.
- Take qualitative methods seriously: This is the big one. Don’t skim that “Research Design” section and treat the case study like a stand-alone short story. What method did the author use to analyze the data? Did they implement it correctly? Did they acknowledge and reject alternative explanations? If the answer to all of these is yes, then the paper should move forward even if you don’t like the conclusion.
Any thoughts? Is this problem as bad as I claim? Is it fair to blame qualitative scholars? Any other ways we can fix it?
This is good advice. Many journal editors will tell you that some research programs have a supportive referee culture, while in others reviewers tear submissions to shreds. It’s not hard to do the math on what that means for the accumulation of academic capital.
Along similar lines, Bill Wohlforth did some poking around when he took over Security Studies and concluded there was a significant gap in citation practices – that is, qualitative scholars in security studies didn’t feel the need to cite a lot of cognate work in the field.