I understand that there’s been some recent blog-chatter on one of my favorite hobbyhorses, peer review in Political Science and International Relations. John Sides gets all ‘ruh roh’ because of a decades-old, but scary, experiment that shows pretty much what every other study of peer review shows.
Then, perhaps coincidentally, Steve Walt writes a longish post on “academic rigor” and peer review. Walt’s sorta right and sorta wrong, so I must write something of my own, despite the guarantee of repetition.
What does Walt get right?
Third, peer review is probably overvalued because reviewers’ comments are often less than helpful and rarely decisive. By the time most articles are submitted for publication, they’ve usually been presented at academic seminars and have gone through multiple drafts in response to suggestions from the authors’ friends and colleagues. I’ve occasionally gotten useful suggestions from an anonymous reviewer’s report, but I’d say that more than half the comments I’ve received over the years were of no value at all and I simply ignored them. Indeed, a dirty little secret is that a lot of “peer reviews” are no more than a couple of cursory paragraphs along with a recommendation to publish, reject, or revise and resubmit. If that’s the reality of the review process, then why do we fetishize publication in “peer-reviewed” journals as much as we do? In other words, knowing that something got published in the American Political Science Review, World Politics, International Organization, or International Security doesn’t tell you very much about its real value. You have to read it for yourself to make a firm judgment [emphasis added].
The abysmal quality of a large percentage of peer reviews is an open secret in the field. Part of the problem is structural: as the field has placed greater and greater weight on the publication of peer-reviewed articles (in leading journals) for hiring and promotion, scholars have followed the rational course of increasing the number of articles that they submit for review. Indeed, this has been going on for decades. But individual scholars can proliferate their submissions much more easily than the field can generate more qualified reviewers. The math is simply dreadful: the effectiveness of peer review diminishes extremely rapidly as its importance increases.
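To see why that arithmetic is so unforgiving, here is a back-of-envelope sketch. Every number in it is hypothetical, chosen only to illustrate the dynamic; none comes from ISQ or any actual journal:

reports_per_submission = 3   # referee reports a journal solicits per manuscript
journals_tried = 4           # stops a manuscript makes on its way 'down' the food chain
papers_per_scholar = 2       # manuscripts a scholar sends out per year
reviews_supplied = 6         # reports that same scholar is willing to write per year

# Every rejection-and-resubmission consumes a fresh set of reports,
# so demand for refereeing scales with submissions, not with publications.
reports_demanded = papers_per_scholar * journals_tried * reports_per_submission
print(reports_demanded)                      # 24 reports demanded per scholar per year
print(reports_demanded / reviews_supplied)   # 4.0: demand outstrips supply fourfold

On those made-up numbers, each scholar generates four times more refereeing work than they perform, and every additional submission compounds the shortfall.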
But a lot of the problem is of our own making. Many scholars refuse to review manuscripts. For example, in 2012 about 65% of the scholars contacted by International Studies Quarterly eventually submitted a review. Reviewers often, as Walt notes, put little obvious effort into this aspect of their craft. I’ve noticed that reviewers almost never engage in even a cursory check of referenced material to ensure that an author accurately represents sources (and nobody really cares, anyway). There’s plenty of blame to go around, of course. Lack of transparency when it comes to the quantitative data used in manuscripts doesn’t exactly ensure rigorous peer review. Inaccessible primary-source material creates similar problems in qualitative work.
Moreover, Walt notes that he often ignores reviews. Given the quality of most reviews, combined with the stochasticity of the process and the number of journals out there, this makes sense. But it also diminishes the incentives for a reviewer to put in the effort. I once wrote a ~2,500-word (sympathetic) review of a manuscript. Along the way, I pointed out that the author was making claims to novelty that didn’t make sense, as some of the works cited advanced similar arguments. I recently re-reviewed the same piece for a different journal, and the author hadn’t even made the small effort required to correct this problem. So, yeah, it’s hard to feel like putting that much intellectual equity into the process.
We’ve been trying to figure out how to address these issues when we take over ISQ. We think that tougher screening of manuscripts might help a little bit — fewer pieces going out for review means less burden on reviewers — and I’m considering promulgating some formal guidelines for reviewers. We also hope that more extensive data collection might at least shed some light on the process. But this is pretty weak tea.
Anyway, I part company with one of Walt’s conclusions:
I am not suggesting that academia discard peer review and discourage scholars from publishing in prestigious journals. Rather, I’m suggesting that the social sciences would be more useful and more rigorous if members of these disciplines adopted a less hidebound approach to the merits of different types of publication. “Should it really be the case,” Bruce Jentleson correctly asks, “that a book with a major university press and an article or two in [a refereed journal] … can almost seal the deal on tenure, but books with even major commercial houses count so much less and articles in journals such as Foreign Affairs often count little if at all?”
There are good reasons to discount publications in outlets such as Foreign Affairs that have nothing to do with the unreliability of peer review.
First, what makes something scholarship isn’t the peer-review process, but the mode, style, and form of argument. Foreign Affairs pieces seldom, if ever, reflect the norms, standards, and purposes of writing in the scholarly vocation. And they aren’t supposed to. By design. We can argue over whether this makes them better or worse — and I would say “just different” — but I don’t think that we can argue that they’re the same kind of work.
Second, who gets to publish in outlets such as Foreign Affairs isn’t precisely a function of position in specific elite networks, but it is heavily inflected by relations of friendship, influence, and patronage. Now, academic publishing in International Relations is far from a strict meritocracy. Some journals are arguably even clubbier than their non-academic counterparts. But we almost certainly overvalue those journals, and we should certainly not compound our mistake by assigning more value to outlets whose editorial decisions are guided by concerns, norms, and incentives very different from those we aspire to as scholars.
So, yes, we should wean ourselves from the organized hypocrisy surrounding peer review. But that doesn’t mean granting me promotion for blogging, nor granting someone tenure for getting some pieces into Foreign Policy. Those activities amount to “service,” and should be treated as such.
What else? Nothing for now. I’m late on a review.
——————————–
In my experience as editor of an association journal, EVERYTHING has to go out for review. Otherwise authors whine about bias on the part of the editor(s). The process is so politicized that I had senior ‘colleagues’ at august institutions – meaning that, for them, little rides on placing any given piece in any given venue – lying out of whole cloth to the governing bodies of the association about what transpired when I reviewed their piece. (And this is simple individual venality, not the ‘political’ sort of objection leveled by perestroikans in the recent past.) And the associations are so gun-shy that they simply take such complaints at face value.
Hence AJPS/JoP/APSR are each now reviewing 1000+ manuscripts a year. Refereeing is a non-renewable resource, and the big general journals (to say nothing of the proliferation of smaller niche sub-sub-field journals) are clear-cutting it. Authors submit at the top of the food chain and work their way ‘down’, often not revising at all in light of the comments they receive. Sometimes the comments are crappy. But often they are useful and intended to be so. And often the author doesn’t even correct typos that readers note. Their calculation is ‘what the heck, why not buy this lottery ticket?’ – maybe I get shot down, but there is some small chance I get lucky and place a piece in a top journal.
When, as happens often, I receive a manuscript to referee that I have already read for another journal, and it is clear that the author has done nothing to alter the piece in response to earlier rounds of review elsewhere, I simply email the editor, attach my earlier report, and ask her/him to ask the author to stop wasting my time.
Wow. That sounds awful. I hope it isn’t a preview of our experience. At least we’ve been explicit about increasing the desk-rejection rate.
But there’s a secondary problem here: given that wide variation in peer-review quality, it can be hard for some scholars (and grad students in particular) to discern which objections are “legitimate” and which aren’t. So I’m a bit more prepared to do a “new” review when I get a manuscript from a different journal.
In my experience as a graduate student who recently went through the process of submitting and then revising a piece for a journal (although in history), this definitely rings true. It’s hard to parse the comments and suggestions received from reviewers, and especially to prioritize them when they conflict. In my own (limited) experience, the filter of the journal editor(s) and the comments they attach to reviewer reports are absolutely critical for early-career submitters.
Dan, why the gratuitous snark at me (“all ruh-roh”)? You suggest that the study I posted about has been done a million times, but you don’t cite any study that actually did what that study did: resubmit the same articles to the same journal within a year or two of publication. As I acknowledged in my post, I didn’t initially realize the study was “decades-old,” but why does that matter? You seem to believe — and I agree — that peer review is a rickety enterprise these days. Unless that study’s findings are somehow completely irrelevant in 2013, it only helps your cause.
I wasn’t snarking at you, or about you. I’m “all ‘ruh roh’” all the time, and consider it an appropriate reaction along the lines of “D’oh!” and “#facepalm.”
As for the lack of links: I thought I was providing them by linking back to the Duck archive on peer review. But we’re missing a lot of stuff, and there’s been serious link rot (including to the key bibliographic resource). In brief: medical and natural scientists have conducted experiments that suggest stochasticity in peer review; studies of who challenges reviews (and succeeds at doing so) suggest that pedigree and experience with the journal are decisive; we all know about p-value bias and confirmation bias in publishing; etc. JSTOR and Google are great resources. A number of journals, including JAMA, have had special issues on the subject; a number of associations have conducted reviews (see, e.g., the citations at https://www.publishingresearch.net/documents/PRCsummary4Warefinal.pdf). Oddly enough, one of the articles I was thinking of is precisely the one you excerpted.
On a good day, XKCD is still funny: https://xkcd.com/1191/
But most days aren’t so good, it seems…. :-(
On ignoring comments:
So sometimes the “reviewer” seems not to have read the submitted article in question at all. As one friend recently put it: “Waiting six months for a journal rejection is always fun, particularly when you get helpful comments like ‘there is no hypothesis’ (when in fact one is clearly articulated on page 1 of the paper) and ‘you should instead use a different dependent variable’ (which turns out to be precisely the dependent variable that you actually used).”
Second, the suggestions provided on what to do with the paper are often like those your committee gives you in grad school — ideas on what they would do if it were THEIR project, not on what to do to improve YOUR project. So you ignore them, because they just turn out to be irrelevant. Given the minuscule length of many articles these days (6,000-8,000 words, vs. 10K-12K some years ago), there’s never enough room to rebut every reviewer critique, good or ill, whose validity one disputes, much less to include every citation that reviewers seem to demand (esp. when reference lists are counted against one’s word limit).
On another note: if peer review is as broken as the above and the linked articles make it seem, then how does “service” writing really differ from “research” writing, except in time frame? If peer review provides little quality control, then really it just slows the process down.
Finally, an admission. Recently I was asked to review a paper for a high-profile journal on exactly the same topic on which I had received a “reject” a month before. I explained to the editor that I had to decline for that reason (received a reject, same topic, ergo I was unlikely to be anything like neutral regarding the fate of said paper). The journal editor seemed annoyed by the honesty. Fine. I’ll just reject it next time with cursory, unrelated comments. (Not really, but is this what s/he really wanted?)
These are all good points. Two major reactions: (1) when I suggest that there’s plenty of blame to go around, I do not mean to absolve anyone — I even point out that ignoring reviews is rational given their modal quality; (2) I’m troubled by your final point — why should you not do your best to be fair under those circumstances? Golden rule and all that.
Because I doubt that I could actually be fair under the circumstances.
Fair enough. I’m sorry the editor didn’t appreciate your candor. But perhaps that speaks to the difficulty of getting reviewers….
It may well be difficult. But this was one of the top 3 IR journals, so in theory there should have been a deepish pool. I tried to be very polite in my justification, and I expressed a willingness to read other articles in the future when the circumstances were more conducive to fairness.
This is excellent. Your second point is the critical one in my experience. It is MY paper, not yours. So telling me to turn the paper upside down/inside out so that it fits your idea of what is good in this area is exactly the kind of preposterous academic arrogance that makes our field so rigidly (however unspoken) neo-feudal.
Your other point about word length and letting some things go is also very good, particularly when it comes to methods exposition. When I review, I am often very explicit in saying that this or that could be argued more, but that within the constraints of an article it’s probably OK. Ultimately, the point of review, IMO, is to help the submitter get to her substantive political argument, which is the whole reason we are in this field. But if my reviewers had their way, half of every paper I write would be spent justifying the research design. This eats up room for theoretical and data exposition, but that matters less, probably because, as Dan says, no one actually checks references, and the author likely knows more about the details of something they’ve spent a lot of time on than you do. Hence, we just harp on what we know – method – which we all spend so much time reading about now (at least I feel like I have to).
This methodological overkill is also a big reason why, per Walt, unreadability plagues our discipline, and why outsiders, including our funders and all those suspicious Republicans, find us so infuriating.
The reason we harp on method, of course, is that we have no idea what our epistemology is, which is generally a prior step to choosing a method. In that desire to become more “scientific” (i.e., more like a natural science), we’ve skipped over deciding how we figure out what counts as knowledge, and so we commit academico-onanism over method. In this regard — having an epistemology — anthropology and even theology have entered the 20th century, while we remain mired in the late 18th at best.