It’s happened to all of us. You get that email, “Decision on Manuscript…,” and open it with a bit of trepidation, only to find a (hopefully) politely worded rejection from the editor. Sometimes this is justified. Other times, however, the rejection comes courtesy of the legendary “Reviewer #2”: a cranky, ill-informed, hastily written rant against your paper that is not at all fair. The details vary–they don’t like your theoretical approach, don’t understand the methods, are annoyed you didn’t cite them–but the result is the same: thanks to a random draw from the editor’s reviewer list, you’ve got to move on.
We all seem to agree this is a problem. Peer review is finicky, and it often relies on gate-keepers who fail to assess work objectively. The pressure to publish on junior faculty and grad students is immense. And editors are overworked and overwhelmed. Dan Nexon recently provided a great service by writing a series of posts on his experience at International Studies Quarterly. They gave a lot of insight into this often opaque process, and they got me thinking about what to do about the situation described above.
One solution is to improve the guidance given to peer reviewers. As someone who reviews just about every article on religion and terrorism (at least that’s how it feels), I see great variation across journals. Some helpfully explain what a useful review looks like, which can help standardize the sort of feedback reviewers provide. Of course, reviewers who responsibly follow directions would probably try to provide a fair review in the first place, so this approach can only go so far.
Another solution is to play the game. Instead of carefully weighing which tier of journal or level of general interest is most likely to produce an acceptance, just send it out. A low-tier journal could pull a hostile reviewer, while a top one could match you with a friendly one. So submit (and re-submit often). Or you could try to figure out the identities of the potential Reviewer #2s and strategically neutralize their complaints (usually by citing and engaging with their work).
Another solution is one that at first seems ridiculous (especially to overworked editors), but hear me out: make appeals a routine part of the process.
Technically, authors can appeal decision letters. In reality, they mostly don’t. Appealing can irritate editors, hurting an author’s chances of future publication. And depending on whether reviewers see the appeal (and figure out who the author is), it can antagonize colleagues.
Nexon offered some thoughts on appeals. He says they should start with a “polite expression of concern,” to avoid seeming like an attack. He suggests editors provide substantive information in their reasons for a decision, to facilitate this process. And he argues that journals should have standard appeal processes.
I want to focus on that last suggestion. I suspect journals don’t have standard appeal processes for the same reason some professors don’t include grade appeal procedures in their syllabi: they worry it will open them up to constant appeals. And the more detailed their standards are, the more authors will try to “check the box” to prove they should have been accepted.
This is completely reasonable, but it runs up against the reality of the often-random peer review process I discussed above. Treating appeals as a dialogue between reviewers, editors, and authors would make the process fairer. It may even make it more efficient.
How could allowing frequent appeals–multiplying the work that goes into each manuscript–improve the efficiency of the process? Because it would decrease the randomness of peer review. As it stands, authors may send off work that isn’t completely polished; since there’s a good chance it will be rejected with an unreasonable review anyway, why try to perfect it? Sending it out at an early stage lets reviewers flag potential issues that the author can then anticipate and integrate into the next version.
But if authors know they will have a chance to push back on problematic reviews, they will have more of an incentive to get the manuscript just about perfect before sending it out; that way, they will have a stronger case for an appeal. This will decrease the work of editors and reviewers, who will be able to focus on substantive issues rather than glaring problems (e.g., “the author forgot to specify their methods…”).
If authors know they will have a chance to appeal unfair reviews, they will start to gain more faith in the fairness of the review process (and, indeed, academia in general). Meanwhile, reviewers will gain an incentive to put more consideration into their reviews; excellent reviewers who provide tough but substantive feedback will face fewer appeals than “Reviewer #2.”
Editors will have to deliberate over whether an appeal is valid, but the information this provides may actually help them make decisions in the face of sub-standard reviews, especially on subjects they do not directly study. That last point may be particularly important: too often, a narrow definition of sound scholarship informs some of our top journals. This isn’t necessarily bias on the part of editors, just the reality of trying to assess research far from their area of expertise. Routine appeals can help make the case for research that falls outside the mainstream.
I realize this is an “easier said than done” thing, especially since I have never edited a journal. But serious conversations need to start about the integrity and efficiency of peer review. Hopefully an appeals process will be part of them.