Peer Review Bites (and Quacks)

12 December 2011, 1634 EST

Apropos of Brian’s justified rant against peer-review practices in our field, I thought I’d remind readers (or let new ones know) that the pathetic state of the peer-review system in political science is something of a running theme at the Duck of Minerva.

Samples include: Brian’s call to “read more and write less,” a note on the impact of one-strike rules given the stochastic quality of peer review, Laura’s discussion of anonymity, PTJ’s thoughtful thoughts on the subject, Bill Petti’s sharing of a Hitler peer-review video, my call to refuse to review for journals that don’t send decision letters to reviewers, a list of things not to say in a peer review, and five reasons academic peer review doesn’t work.

Looking back over those posts makes clear to me that we’ve never produced a comprehensive indictment of the state of peer review. Perhaps one will be forthcoming. But anecdotal indictments often serve just as well. Via Henry Farrell comes just such an illustrative example, involving an attempt to replicate findings purporting to demonstrate ESP:

Here’s the story: we sent the paper to the journal that Bem published his paper in, and they said ‘no, we don’t ever accept straight replication attempts’. We then tried another couple of journals, who said the same thing. We then sent it to the British Journal of Psychology, who sent it out for review. For whatever reason (and they have apologised, to their credit), it was quite badly delayed in their review process, and they took many months to get back to us.

When they did get back to us, there were two reviews, one very positive, urging publication, and one quite negative. This latter review didn’t find any problems in our methodology or writeup itself, but suggested that, since the three of us (Richard Wiseman, Chris French and I) are all skeptical of ESP, we might have unconsciously influenced the results using our own psychic powers. The story behind this is that Richard has co-authored two papers where he and a believer in psi both did the same experiment, and the believer found positive results but he didn’t. However, the most recent time they did this – which was the best-controlled and largest size – neither found results. This doesn’t exactly give hugely compelling evidence for an ‘experimenter effect’ in psi research, in our opinion. Here’s that last paper, if you’re interested.

Anyway, the BJP editor agreed with the second reviewer, and said that he’d only accept our paper if we ran a fourth experiment where we got a believer to run all the participants, to control for these experimenter effects. We thought that was a bit silly, and said that to the editor, but he didn’t change his mind. We don’t think doing another replication with a believer at the helm is the right thing to do, for the reason above, and for the reason that Bem had stated in his original paper that his experimental paradigms were designed so that most of the work is done by a computer and the experimenter has very little to do (this was explicitly because of his concerns about possible experimenter effects). So, after this very long and unproductive delay, we’re off to another journal to try again. How frustrating.