This is a guest post by Peter S. Henne. Peter is a doctoral candidate at Georgetown University. He formerly worked as a national security consultant. His research focuses on terrorism and religious conflict; he has also written on the role of faith in US foreign policy. During 2012-2013 he will be a fellow at the Miller Center at the University of Virginia.
So I’ve reviewed several manuscripts for journals in the past year, probably the result of always saying yes. Now, don’t get me wrong, I’m glad to do it, and actually enjoy it, at least for now. But I’ve had some concerns about the standards by which I make a decision: what constitutes a publishable paper, what necessitates a revise-and-resubmit, and when is a rejection fair? I’d like to think it’s more than whether I like the paper, whether it supports my research, or whether it cites my research.
But as a dutiful ABD, I’ve naturally thought this through. And as an aspiring publicly engaged academic, I’ve presented below some of the questions I struggle with, along with tentative answers, for the consideration of readers who’ve been reviewing manuscripts much longer than I have. Finally, as someone who likes getting a lot of comments on blog posts, I’ve framed them as questions…
Without further ado:
Is there an objective standard for “so what?”
Next to debates over definitions, the most annoying question one can be asked about a study is “so what?” Unlike definitional debates, however, this one usually matters. But how do we know whether a manuscript passes the test?
This, in my opinion, is where a lot of the arbitrariness of the process comes in. One scholar’s “so what?” is another’s “brilliant!” I think the ideal is a rigorous study with far-ranging empirical and theoretical implications. Few articles achieve this, however.
Maybe two out of the three (rigor, theoretical implications, empirical implications)? A rigorous study with immediate theoretical significance could be published even if it only validates common empirical knowledge. So could a narrowly focused study that reveals novel facts.
And what about the manuscripts that present an original dataset, which are, as Mugatu would say, so hot right now (for good reason, in my opinion)? Does testing existing arguments with new data merit publication? What if the data verifies existing studies, but does so in a more focused or transparent manner? Or does this run the risk of producing lots of studies on the same topic that can’t really be compared?
When is a methodological flaw a disqualifier?
Often an article has some glaring methodological flaw: an inappropriate model, a problematic data-collection process, an obviously omitted variable. This might be more of an issue for quant than qual scholars, but qualitative studies can run into it too, such as a case study that depends solely on one school of historical thought.
What is the reviewer to do? Does the flaw represent sloppiness, poor methodological training, or an attempt to hide problematic findings? I’ve been inclined to give such cases a revise and resubmit, since no one can think of every counterargument, and what looks like a flaw to me may be a live methodological debate to others. But I can see an argument for a major methodological issue resulting in rejection.
Should I take into account the prestige of the journal?
This is minor, but it’s something I’ve thought about. When I’m reviewing for a very selective journal, should I be harder on manuscripts than when I’m reviewing for lower-tier journals? In a way it seems to make sense, but I usually hesitate. In my opinion, there’s no reason why a specialized journal should have lower standards than a discipline-wide one, or why methodologically problematic studies should ever get a pass.
Is it fair to impose my sub-field’s standards on interdisciplinary journals?
Working on an interdisciplinary topic, I’ve run into this a bit, although I realize it gets into philosophy of science that’s a bit above my pay grade. For example, excellent anthropology can be poor political science, and brilliant social science may be oversimplified history. If I’m reviewing a paper in another discipline, is it wrong to criticize it for lacking scientistic hypotheses? Or should a good study be good in every discipline?
Should theory-free inductive studies be published in scholarly journals?
I’ve encountered this more in quantitative than qualitative studies, but again it can appear in both. Basically, it’s a manuscript that asks “what are the trends and dynamics in TOPIC during TIMERANGE?” It’s useful to know such things, but isn’t that journalism or policy analysis rather than social science?
That wasn’t meant to come off as obnoxious as it did.
Can I tell them to cite me?
Just kidding. But seriously, can I give the actual citation? Can I give them a hint (journal? year?)? Kidding. Kind of.
How extensive should the literature review be?
I get why it would be annoying to wade through long lists of citations in every article in a journal. At the same time, it’s equally annoying to read a manuscript that seems to believe it is the first ever to study its topic. Should authors cover the basic theoretical debates, or the most up-to-date research on the topic?
I incline towards the latter. For example, if every article on religion and conflict only cites Huntington and Said, the research program runs the risk of getting stuck in circular arguments or generating volumes of studies without really accumulating knowledge.
Editor’s note: this adorably utopian diagram of the peer-review process comes from Understanding Science: How Science Really Works.
Thanks for this post.
Just one quick comment on the question of lit reviews. I tend to think (but this is just my own view) that the depth of the lit review should be proportional to the size of the claim. So if someone’s saying ‘no one since Francis Bacon has ever claimed X,’ that carries a different kind of documentary burden than ‘in narrow sub-literature X, a clear and ongoing problem exists surrounding the meaning of concept Y or establishing a valid correlation between A and B.’
Thanks. I think that’s right. This has been most irksome to me when it’s a grand claim about a narrow question, e.g. a study of Islamist parties in Egypt that argues no one has really looked into Islam and democracy.
Great set of questions. I share most if not all of them. I think, though, that the fact that we have these questions says something (unpleasant) about the process and its vagaries.
Thanks. Although if multiple reviewers are struggling with these questions, it shows many are at least trying to be fair in their reviews. Maybe. Hopefully.
Yeah, that diagram seems to have left out the stages of submitter rejection-induced depression, peer reviewer ‘I-can’t-believe-that-made-it-past-the-editorial-board’ disbelief, and general alcoholism. Strange how those steps didn’t make it…
I’m going to write a post on the Duck responding to this, from my perspective, on Monday. There are some good thoughts in this post.
Judging by the number of papers that I have reviewed, or know to have been reviewed, in ‘specialist’ journals but which then appear in ‘disciplinary’ journals, I would suggest that ‘specialist’ journals often have higher standards (or better referees) than ‘broader’ disciplinary journals, not the other way round as you suggest!
One other comment, speaking as a past editor: when I asked an ‘early-career scholar’ (student or otherwise) to review a paper, I would always try to match them with someone more experienced, or who had reviewed for me before, and often with two other people. So, while I think it is completely right that you are taking the process seriously and reflecting deeply on it, I would also say ‘don’t worry too much’: the editors will read your comments and weigh them alongside the others. So think of these early reviewing opportunities as an apprenticeship and a learning experience.