What’s the right number of reviews to do in a year? For my university’s annual performance review, I went through and counted how many I did last year, which included some grant and book proposals. I topped out around 40, which seemed like a high number to me.

How many is too many?
Is that actually a high number though? Perhaps that is just the price of seniority and being in the business for a good number of years. In my case, I get a lot of requests for reviews from journals outside political science, since I write on climate change impacts and water.

Is it fair?
We know that the reviewing process is skewed and inequitable. Some people submit but won’t review much or at all and are regarded by journal editors as bad citizens. But most people simply do not get many review requests. Paul Djupe’s 2015 paper in PS found that my 40 reviews would be well above the norm:

At PhD-granting institutions, assistant professors averaged 5.5, associate professors averaged 7.0, and full professors averaged 8.3 in the past year; everyone else averaged just under 3 reviews a year.

Djupe, PS 2015

Last year, Djupe and his co-authors Amy Erica Smith and Anand Edward Sokhey wrote a blog post here on the Duck noting that most scholars get few review requests and typically complete them, averaging five per year.

They also found some evidence, echoing a finding in the natural sciences, that some reviewers may be doing a lot of the reviewing:

At the same time, some social scientists do indeed receive a lot of requests. In both disciplines, the most sought-after decile of peer reviewers receives 20 or more requests per year.

They found that in political science 16% of reviewers do half the reviews. While that might seem unfair, it might be because they are also submitting a lot. You also have to reflect on what your debt to other reviewers is, something also discussed in the Djupe et al. post.

For every piece you submit, you owe something back to the system. Perhaps the most predatory authors are those who submit a lot but don’t review. Djupe’s evidence suggests that this happens, but a good chunk of productive scholars are also menschen when it comes to reviewing.

I have had discussions with colleagues that suggest different rules of thumb. One is 3 reviews for every 1 journal article you submit, on the premise that every article you write typically generates about three reviews. In response, some folks have raised the question of co-authored papers, suggesting that perhaps you don’t need to review as many papers per submission since that journal submission N is shared with other authors.
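That rule of thumb, with the co-authorship adjustment, can be sketched as a quick back-of-the-envelope calculation. This is just my illustration of the arithmetic discussed above; the function name and structure are hypothetical, not anyone’s formal model.

```python
# A rough sketch of the "3 reviews per submission" rule of thumb,
# with each co-authored paper's debt split evenly among its authors.

def reviews_owed(solo_submissions, coauthored_team_sizes):
    """Estimate annual review debt.

    solo_submissions: number of single-authored submissions this year.
    coauthored_team_sizes: author-team size for each co-authored
        submission, e.g. [2, 3] for one two-author and one
        three-author paper.
    """
    debt = 3 * solo_submissions
    debt += sum(3 / n for n in coauthored_team_sizes)
    return debt

# Two solo papers plus a two-author and a three-author paper:
print(reviews_owed(2, [2, 3]))  # 6 + 1.5 + 1.0 = 8.5 reviews owed
```

By that math, even a productive year of four submissions leaves you owing fewer than ten reviews, well under the 40 I did last year.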

I asked my wife, who is an Americanist, what her rubric was. In her circle, she said, the norm was 3 per month, which sounded like more or less what I had done last year but also an obscenely high rate of reviews. It made me wonder whether articles in her field were shorter than in international relations.

Is it sustainable?
All of this discussion raised a different question for me about the sustainability of this practice. We’re all doing most of these reviews for free, save for the occasional $100 honorarium or the value of books you get from presses for reviewing for them.

Journals and presses then turn around and use the fruits of our labor to put up academic work behind a paywall, which also seems like a model that might increasingly generate pushback from academics given the proliferation of journals and submissions. That said, open access journals don’t seem like they have a sustainable business model either.

The Publons system, which tracks how many reviews you do, is a sort of social credit system and seeks to create a way to formally recognize the role of peer reviewers.

Dan Nexon’s tweet generated an interesting exchange with Djupe, who had a far more favorable take on the enterprise. For Dan, Publons seems like another way for publishers to surveil the political science community in a business that does not compensate reviewers for their time.

The back and forth with Nexon and Djupe also includes interventions by Paul Musgrave who noted that he’s using Publons to keep track of everything as he wends his way towards tenure review. I do that informally by just counting the reviews that I wrote in a single folder. Do I need Publons for that?

Maybe my Publons profile will allow me to go straight to the good place in the afterlife…

Me

My initial reaction is that the formal metrification of everything could have unintended consequences. For example, I’m a deputy assistant editor at a journal where we’ve been asked to rate the quality of reviews. I suppose that evaluation process goes both ways, like an Uber ride, and maybe there is some value in tracking really bad reviewers, like the recent review we received that was a single sentence. We probably won’t ask that person to review again. I’m just not sure how great it is to make it easy for publishers to track all this information about me, when it is all free labor for them.

I think the more prosaic concern I have with Publons (having invested zero time in trying to figure out the process) is this: what value does it offer?
The real value add (and maybe the site has that functionality already) would be something more efficient than ORCID that lets me log in to all these reviewing sites without having to remember a proliferation of passwords.

In the meantime, I’ve got another email with a request for review while writing this post. Back to the grind.