Note: this is the first in what I hope will be a series of posts opening up issues relating to journal process for general discussion by the international-studies community.
Although many readers already know the relevant information, let me preface this post with some context. I am the incoming lead editor of International Studies Quarterly (ISQ), which is one of the journals in the International Studies Association family of publications. We are planning, with PTJ leading the effort, some interesting steps with respect to online content, social media, and e-journal integration–but those will be the subject of a later post. I have also been rather critical of the peer-review process and of the fact that we don’t study it very much in International Relations.
The fact is that ISQ by itself–let alone the collection of ISA journals and the broader community of cognate peer-reviewed publications–is sitting on a great deal of data about the process. Some of this data, such as the categories of submissions, is already in the electronic submission systems–but it isn’t terribly standardized. Many journals now collect information about whether a piece includes a female author. Given some indications of subtle, and consequential, gender bias, we have strong incentives to collect this kind of data.
But what, exactly, should we be collecting?
Demographic Data: To begin with, it strikes me that any data we collect about authors should also be collected for reviewers. If we want to better understand the effect–or lack thereof–of categorical attributes on the peer-review process, then we need to know about reviewers.
I am pretty confident that we should be collecting more granular data about the gender of authors. Whether one of the authors is a woman provides important data, but there’s no reason we can’t record, for example, MFM or FFF for tri-authored papers. Following current ISA conference guidelines, the specific author query is likely to look something like “Female/Male/Other/Decline to Answer.” Indeed, because demographic data can be sensitive, these efforts require a “decline to answer” option.
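To make the idea concrete, here is a minimal sketch of how a submission system’s export might collapse per-author responses into a composition code. The field names and single-letter coding are my own illustrative assumptions, not anything ISQ has adopted:

```python
# Minimal sketch: collapse per-author gender responses into a composition
# code like "MFM" or "FFF" for multi-authored papers. The coding scheme and
# response labels are illustrative assumptions, not ISQ policy.

CODES = {
    "Female": "F",
    "Male": "M",
    "Other": "O",
    "Decline to Answer": "D",
}

def gender_composition(author_responses):
    """Map an ordered list of author responses to a composition string."""
    return "".join(CODES.get(r, "D") for r in author_responses)

# A tri-authored paper, responses in author order:
print(gender_composition(["Male", "Female", "Male"]))        # -> "MFM"
print(gender_composition(["Female", "Female", "Female"]))    # -> "FFF"
```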
Other straightforward data–which we tend to have anyway–includes citizenship and/or country of residence. My initial thought was that we should collect race and ethnicity data, but I’m starting to see how daunting this endeavor will be for journals with significant non-American submissions. Both the kinds of answers, and their meanings, vary from country to country. Journals could develop conditional survey questions, but that doesn’t solve the fundamental problem of what options to provide for which answers. For example, should US citizens receive 2010 US census options?
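One way to picture the conditional-question problem is as a lookup keyed by country, with category lists that would have to be chosen and maintained separately for each one. The categories below are placeholders only (the US list loosely follows the 2010 census race question); what to offer everywhere else is exactly the open question:

```python
# Sketch of country-conditional race/ethnicity survey options. The lists are
# illustrative placeholders; deciding what is meaningful in each country is
# the unsolved problem discussed above.

RACE_ETHNICITY_OPTIONS = {
    "United States": [  # loosely modeled on the 2010 US census race question
        "White",
        "Black or African American",
        "American Indian or Alaska Native",
        "Asian",
        "Native Hawaiian or Other Pacific Islander",
        "Some Other Race",
        "Decline to Answer",
    ],
    # Other countries would need their own, locally meaningful lists...
}

DEFAULT_OPTIONS = ["Self-describe", "Decline to Answer"]

def options_for(country):
    """Return the survey options shown to a respondent from `country`."""
    return RACE_ETHNICITY_OPTIONS.get(country, DEFAULT_OPTIONS)
```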
And what about LGBTQ data? As astute readers will have noted, ISA currently allows an “other” option for conference registrants. But this doesn’t necessarily provide the best option for those who self-identify in non-heteronormative terms.
Citation Data: The field has started to make use of the wealth of data provided by citations in published papers, but what about unpublished papers? This would provide interesting information about the field in general, and it might tell us something important about differences among initial submissions, published pieces, and pieces that don’t survive the peer-review process.
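Getting citation data out of unpublished submissions would take some text processing. As a very rough sketch, assuming the editorial system can hand over plain text of manuscripts, one could at least pull simple parenthetical author-year citations; real manuscripts would need far more robust parsing:

```python
import re

# Rough sketch: extract simple parenthetical citations like "(Keohane 1984)"
# or "(Fearon and Laitin 2003)" from a submission's plain text. Many real
# citation formats would slip through this; it is purely illustrative.
CITE_RE = re.compile(
    r"\(([A-Z][A-Za-z'-]+(?: (?:and|&) [A-Z][A-Za-z'-]+)*),? (\d{4})[a-z]?\)"
)

def extract_citations(text):
    """Return (authors, year) pairs for simple parenthetical citations."""
    return CITE_RE.findall(text)

sample = "Institutions matter (Keohane 1984), as do civil wars (Fearon and Laitin 2003)."
print(extract_citations(sample))
# -> [('Keohane', '1984'), ('Fearon and Laitin', '2003')]
```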
Oddball Data: Does the local time of submission correlate with reviewer decisions, e.g., is there a “haven’t eaten lunch effect” that might be discernible underneath admittedly problematic data? What about other temporal factors, such as time of year? Does formatting correlate with publishing outcomes? There seem to be some interesting options here.
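As a sketch of how one might probe the “haven’t eaten lunch” idea, suppose the editorial system could export a table with each review’s local timestamp and the recommendation; the file and column names here are hypothetical:

```python
import pandas as pd

# Rough sketch: does the local hour at which a review is filed track the
# recommendation? Assumes a hypothetical export "reviews_export.csv" with
# columns "review_submitted_local" (timestamp) and "recommendation"
# (e.g. accept / minor / major / reject). Purely illustrative.

reviews = pd.read_csv("reviews_export.csv", parse_dates=["review_submitted_local"])

reviews["hour"] = reviews["review_submitted_local"].dt.hour
reviews["pre_lunch"] = reviews["hour"].between(11, 13)  # crude "hungry reviewer" window

# Recommendation rates inside vs. outside that window.
print(pd.crosstab(reviews["pre_lunch"], reviews["recommendation"], normalize="index"))

# Month of year, for the seasonal question raised above.
reviews["month"] = reviews["review_submitted_local"].dt.month
print(reviews.groupby("month")["recommendation"].value_counts(normalize=True))
```

Any pattern that turned up would of course be correlational and confounded in all the ways noted above; the point is only that the data would make such checks cheap to run.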
What do you all think?
A couple metrics I’d like to see
For published papers: time from first availability of the paper (usually this will be on a conference paper archive) to journal publication (first electronically, then in actual print)
For rejected papers: how many of these were published, say within five years, “further down the food chain”?
The first info you could get from the authors (maybe with a self-estimate of how much was in common between the conference version and the final version)
The second would take some work but would be the sort of thing an intern could do. Even just doing this for a random sample (particularly if you had records from the previous editorial team) would be interesting.
For published papers [and your publisher won’t like this]: How many are available in a similar form on some open access site (author’s web site, university open access site, etc.)?
This. (Unsurprisingly.)
I think one issue you may face in saying something definitive about who does and doesn’t get published, and what that looks like, is that there might be events that occur before people even go on to submit their papers that don’t get factored in. I’m thinking of something like a grad seminar where a professor might encourage some students to attempt to publish their papers while not encouraging others (either because he thinks the topics are too esoteric or strange, perhaps because he doesn’t do a lot of gender politics, or because of some bias he or she might have against non-English speakers, etc.). As a result, you might have people selecting out of the process before they even get to the journal submission stage. Then you’ll be able to say that a certain percentage of papers by non-Americans or women or GLBT scholars gets published, but that doesn’t take into account the fact that some fine papers might never even get refined and submitted.
I was recently speaking with some female colleagues and one individual was arguing that women refine and refine their drafts, thus submitting less but getting more of it published, while men are more likely to submit something less polished, get an R and R and go on to publish. Interesting theory. I have no idea whether or not you could investigate it using that data. But that’s the sort of pre-submission effect I’m talking about as well.
These are fantastic points; we have to be extremely careful about how we interpret whatever data journals collect, and not think that whatever we find resolves the issue.
Forget outcomes. Measure latency.
Don’t forget the other obvious measure. Who gets (and gives) longer reviews? Simple word counts of reviewers’ notes might say a lot.
My succinct response above is not anti-Mary; it’s just one valid empirical strategy based on these points.
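On the word-count suggestion above: a comparison like that would be nearly trivial to run, assuming a hypothetical export containing the review text and some attribute to split on (the file and column names below are invented for the sake of the sketch):

```python
import pandas as pd

# Sketch: average length of reviewer reports, split by a categorical
# attribute (here a hypothetical "author_gender_composition" column).
# Column names are assumptions about what an editorial export might contain.

reviews = pd.read_csv("reviews_export.csv")
reviews["review_words"] = reviews["review_text"].str.split().str.len()

print(reviews.groupby("author_gender_composition")["review_words"].describe())
```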
FWIW, ISR collects (necessarily imperfect) demographic data about the reviewers and about those whom we ask to review, even if they decline. I can’t remember if this is required for the annual report of all ISA journals or not–I’m still pretty new at this–or whether it’s just an inherited ISR practice. But in any case, I haven’t been able to make heads or tails of patterns in the data yet.
Science journals print the date of submission and the date of publication on each article, and that window appears to be much shorter than at most poli sci journals. I’m not sure when they started this practice, but it would be interesting (similar to Phil’s suggestion) for ISQ to begin it. I don’t know if there is a way to experimentally test whether publishing those dates itself speeds up the time from submission to publication.
You could also track which kinds of institutions get published (Ivies, top 10, top 20, particular regions), which specific universities get published, which methods (qualitative vs. statistics vs. game theory vs. mixed), and policy relevance (Andy Bennett and Ikenberry analyzed this at some stage in an article in APSR).