I thought I would offer my take on the series of posts below on how to reform the peer-review system. Part of the problem is bad reviews. We could remove reviewers’ anonymity, but that raises obvious problems (bad blood, retaliation, pulled punches, reluctance to speak truth to power, etc.). A better solution would be to rely on the expertise of editors. A really stupid review is obvious even to non-experts. And if one reviewer says X and the other says ‘opposite of X’, then you should know you have to get a third opinion, not just reject the piece for failing to garner sufficient support from the start (this is Dan’s point).
Why can’t editors do this? The biggest problem seems to be workload. They just don’t have the time to sort through reviews carefully and evaluate them as critically as the paper under consideration. But how do we lessen their workload? Write less. Read more. This goes for graduate students and professors alike. Graduate students are “socialized” earlier these days, which generally means they are pushed to publish. Tenure standards are higher everywhere now. None of this is good for the field. All of it means too many papers that don’t tell us very much. And they make editors’ jobs impossible.
Political science and IR today remind me of elementary education. Most everyone in educational psychology seems to agree that kids really shouldn’t be pushed early on, that the social part of early grade school is more important than the academic, and that kids aren’t cognitively ready to do proper book learning. Yet we expect more academically of our little ones than ever before. In my town, they screen kids entering kindergarten for their competence in the ABCs, then tell us what to work on over the summer. Go f*&k yourself.
Most everyone also agrees that some kids, particularly boys, develop more slowly intellectually but eventually catch up, and that it is better to encourage creative thinking than rote learning. Yet homework starts in first grade, and the first thing they focus on is penmanship. Ugh.
Without saying that grad students are just kids, we have to recognize that there is a growth process in graduate school, and frankly even in the assistant professor stage. By pushing younger academics to write too early and too much, we deny them the chance to read widely and think more creatively. We also encourage people to become carbon copies of their advisors: people who never get the chance to develop a unique voice will merely parrot their mentors. The metric of success becomes a simple count of journal articles and where they are placed, not WHAT THEY SAY THAT IS NEW AND INTERESTING. This gets to some of my earlier complaints in Stuff Political Scientists Like, that we fetishize new data to the detriment of new ideas.
This is partly because the people who are doing the judging (tenured faculty members, journal reviewers, search committees) are themselves writing too much and don’t have time to properly read and critically evaluate other people’s work. We have created an academic environment where everyone is in his or her own bubble. No wonder we all get (and write) lousy reviews. I personally don’t keep up with the journals as I would like to and this makes me sad.
Maybe I am idealizing the (fairly recent) past, and maybe Berkeley was just different, but when I was in grad school, not so long ago, grad students didn’t judge academics by how many APSR articles they had published, but by their ideas. Oh, Schweller, he is the ‘balance of interests’ guy. Oh, Moravcsik, he is the ‘liberal intergovernmentalist.’ I frankly had no idea how much people had published, only what they had published.
Of course, this probably cannot be changed because it is an arms race. I can’t tell my students to slow down, to stop and think, because they will get cut off at the knees by the more ‘socialized’ grad students in other programs on the job market. Or they might not get tenure at a university that simply counts beans. I find this not only sad but also deleterious to the discipline. I can’t do much about it, but my kids are going to spend the summer in the yard, not with workbooks.
“I personally don’t keep up with the journals as I would like to and this makes me sad.”
Total agreement.
Brian, you hit so many nails on the head here! Also, you are a great Dad.
One thing you didn’t mention… by “write less” you are probably urging people to submit fewer and more polished articles, but would it also help or hurt for reviewers to write less? Should we be learning how to critique more succinctly? This would also reduce editors’ workload, though I don’t know whether it would also affect the quality of reviews. Thoughts?
Great entry, Brian. I agree strongly with many of your points.
I am dubious, however, about the proposal to improve the review process by relying more on the expertise of editors. In many of the debates over the state of the discipline, we hear scholars kvetching about how the field has become myopic and intolerant in large part because of editorial gatekeeping at the top journals.
In my opinion, we need an open and frank discussion about editorial board structures and processes in IR journals (and book series, for that matter). Every year at ISA, there is a roundtable of IPE journal editors (I’ve been on it the last few years to represent RIPE, as have editors from many other journals such as IO). What always strikes me is the huge variation in how journals are managed. Some are very hierarchical, with lots of room for editorial interference or discretion (to be fair, this autonomy is not always abused by editors – such claims are too easily asserted by disgruntled authors whose papers have been rejected after review or screened out). Others have very different governance structures, where authority over reviewer selection and publication decisions is much more diffused. The selection of editors is often non-transparent and non-competitive. It can also have a self-perpetuating effect, reifying trends and biases in the field when exiting editors pick like-minded successors (although sometimes this is done to ensure sufficient substantive expertise on the editorial team).
Disenchantment with editorial control also contributes to another problem you identify: the proliferation of journals in our field (as disgruntled scholars seek to create space for their work by starting new journals) and our growing inability to keep up with everything. We may need to write less by limiting the outlets.
Speaking as an editor, I’m all for greater editorial discretion, as long as it is understood that an editor’s job is not to gate-keep but to help authors develop their arguments. Editors are midwives, or record producers; we’re not cullers of a herd that is too large to keep living on the game reserve. And yes indeed, this takes more time, so reducing the overwhelming flow of manuscripts would be a great goodness. (JIRD is a solid second-rank European IR journal and we get well north of a hundred submissions a year, a number that looks only to increase as more and more assistant professors and lecturers look for Thomson-Reuters-ISI-listed places to publish yet another version of the same argument they’ve made elsewhere, but that’s what one needs to do to get tenure and promotion these days.) Yes, we reject stuff all the time, but we reject it based largely on the amount of work it would take to turn the argument into something publishable, work that sometimes entails a fundamental re-thinking of the research design, the use of evidence, and the theory behind the explanation.
But as you note, it all starts with and goes back to those employment pressures, which lead to a counterproductive notion that the point of peer review is to weed out crap so that the good stuff survives and gets published — which is, frankly, silly. Peer review provides a veil of anonymity so that scholars can engage with scholarly arguments on their merits, in the hope of making them better by pointing out where they can be improved. Editorial discretion does the same thing, but non-anonymously. The point is improving the argument, making it the best it can be, and everything else — including implications for employment and promotion — is a distraction from the task of scholarly engagement.
So that’s the challenge: with a flooded marketplace of job applicants, and a set of public and private regulators craving numerical measurements of scholarly productivity, how does one preserve the intellectual integrity of the task of scholarly argument? Maybe we need to craft a new metric for scholarly contribution, something that could serve as a rebuttal to the charge that academic X is a better scholar because she’s published 27 articles in top-5 journals that say roughly the same thing with minor variations while academic Y has only published 15 articles, one in a top-5 journal, but all of them make original contributions. “Originality Quotient”?
Brian writes, “I personally don’t keep up with the journals as I would like to and this makes me sad.” Jarrod Hayes, in comments, is in “total agreement.”
I’m sympathetic to their plight, but then again, as Brian also noted, too many scholars write “too many papers that don’t tell us very much.” PTJ adds that “assistant professors and lecturers look for Thomson-Reuters-ISI-listed places to publish yet another version of the same argument they’ve made elsewhere.”
It sounds like none of us should sweat a big stack of unread journals too much. The trick, of course, is figuring out which articles must be read. I cannot offer a tangible way to implement PTJ’s proposed “originality quotient,” but I do think he’s on the right track.
I’ve made great use of prior efforts on this blog (and Drezner’s) to highlight truly innovative and high quality articles in IR theory, security studies, etc. Maybe we should be posting more of those kinds of pieces — and seek substantial contributions in comments from our readership.
That said, I do worry that our crowd’s wisdom would be somewhat tainted by lack of diversity and independence — problems often worsened by quality graduate education because everyone is originally trained to read the same books and to think through the same handful of paradigmatic prisms.
That’s very true, Rodger. When I did have a chance to go back and look over 10 years of good journals, I didn’t find a lot that I thought was vital for my IR graduate syllabus. I did find some, but such pieces are hard to find because there is too much fluff and not enough time. Writing less would help on both scores!
Also, we should be highlighting good work, like the Monkey Cage does. I agree totally. But they generally highlight only a particular kind of work. Let’s do better.
Hear, hear!