Tag: publishing

Want to fix peer review? Standardize appeals

It’s happened to all of us. You get that email “Decision on Manuscript…,” open it with a bit of trepidation, only to find a (hopefully) politely worded rejection from the editor. Sometimes this is justified. Other times, however, the rejection is due to the legendary “Reviewer #2”: a cranky, ill-informed, hastily written rant against your paper that is not at all fair. The details vary–they don’t like your theoretical approach, don’t understand the methods, are annoyed you didn’t cite them–but the result is the same: thanks to a random draw from the editor’s reviewer list, you’ve got to move on.

We all seem to agree this is a problem. Peer review is finicky, and it often relies on gatekeepers who can fail to assess work objectively. The pressure to publish for junior faculty and grad students is immense. And editors are overworked and overwhelmed. Dan Nexon provided a great service recently by writing a series of posts on his experience at International Studies Quarterly. This gave a lot of insight into an often opaque process, and it got me thinking about what to do with the situation above.

Continue reading

Reflections on Journal Editing: Caveats

Josh asked me if I would write a series of posts at the Duck of Minerva reflecting on my time editing International Studies Quarterly (ISQ). I agreed.

This post is less a reflection than some background and caveats. I figure that by collecting them in a single post, I won’t have to junk up subsequent entries in this series. I’ll just refer back to what I’ve written here.

Background. I formally edited ISQ from 2014 to 2018, although my team started to handle new manuscripts in October of 2013. I headed up a very large team; at its peak, it included fourteen academic editors and two managing editors. So my job was as much about oversight as about handling specific submissions. I won’t bore readers with a long discussion of process. You can read about our procedures in our annual reports.

ISQ is the “flagship” journal of the International Studies Association (ISA). This matters for three reasons.

First, “association journals” (such as ISQ) are answerable to external leadership. Their editors depend on the explicit or tacit support of that leadership when it comes to journal policy. Some policies are mandated by the association.

Second, association journals have a duty to the various constituencies of their parent organization. In principle, ISQ should be open to any of the kind of work produced by ISA’s intellectually and geographically diverse membership. It therefore has a responsibility to represent different methods, theoretical frameworks, and substantive areas of research.

Third, although ISQ has middling rankings in some indices—such as the infamous “Impact Factor”—it scores well on subjective rankings of prestige and enjoys significant visibility.

The combination of ISQ’s relative pluralism and its visibility means that, as far as I know, it receives more submissions than any other peer-reviewed journal in international studies. But it also has a lot of space, so even though it received 650+ submissions in my final year as lead editor, our acceptance rate hovered around 10-12%.

Some Caveats. My observations about the peer-review process and journal publishing are based on a single journal in a single field. They also come from a discrete slice of time. Overall submissions at international-studies journals continue to increase. The field continues to globalize. Expectations for scholarly publishing continue to evolve. All of this means that while some of my views may remain relevant for years, others are likely to become quickly outdated.

In my next post, I’ll start talking substance.


Editors, we need to talk about robustness checks

It’s happened to all of us (or at least those of us who do quantitative work). You get back a manuscript from a journal and it’s an R&R. Your excitement quickly fades when you start reading the comments. One reviewer gives a grocery list of additional tests they’d like to see: alternate control variables, different estimators, excluded observations. Another complains that the long list of robustness checks already in the manuscript obscures the important findings. Sometimes both of these reviewers are the same person.

And it gets even more complicated if the article ends up rejected and you send it to another journal. Now that list of robustness checks–some of which were of questionable value–expands under a new set of reviewers’ comments. And those reviewers irritated by voluminous appendices get even more annoyed by all the tests included with little clear justification (“another reviewer told me to add this” not being an acceptable footnote).

Continue reading

How Not to Get Cited

by Steve Saideman

Put “do not cite, do not circulate” on your paper. I received a paper for the upcoming ISA which had that instruction on it. I yelled at (ok, I mocked) my students last week for doing the same thing. In the olden days, folks would put “do not cite” on their papers because they wanted to polish them before submitting, and because they didn’t want errant results widely circulated. Perhaps there is a fear that if a paper is circulated, it might get scooped.

But  NO!!!!

Continue reading

What Makes a Good Book Review: Some Editorial Advice

The following is a guest post by Andrew Owsiak, Associate Professor at the University of Georgia and Book Editor for International Studies Review. 

The race to push scholarly research into the world carries a few consequences, perhaps the most notable being that it proves challenging to stay up-to-date with what is published. To help with this, some journals, for example International Studies Review[1], publish reviews of recently released scholarly books. These reviews offer great sources of information–to those wishing to remain abreast of current trends, seeking to incorporate relevant work into their own research output, and wanting to incorporate the latest studies into their classrooms. The value of this information, however, depends largely on how the reviewer writes the review. A reader who finds herself mired in jargon has no context in which to understand the review, while one facing only a series of generalities loses grasp of what the book is about.[2]

Mindful of the reader’s plight, I will offer some advice for those writing book reviews. I do this for two reasons. First, book review authors are often—although not exclusively—junior scholars with less publishing experience. As an editor, I enjoy seeing this. A book review can be a great, low-stakes (~1,000 words) point of entry into the publishing world. It familiarizes authors with the submission, editorial, and decision process, often without introducing the peer-review component. It also allows them to enter a dialogue with more established scholars (i.e., the book authors). Yet if we are to be fair to those writing the books, to the review authors, and to the readers of book reviews, it behooves us to offer review authors guidance about what a book review should and (probably) should not contain. How will they know otherwise? And this leads to my second motivation: nobody, to my knowledge, provides this advice comprehensively elsewhere.[3]

Before I continue, let me offer a couple of caveats. First and foremost, I do not pretend to hold all the answers about what journals want book reviews to contain. I have, however, solicited, monitored, read, and issued decisions on a fair number of book reviews in conjunction with other members of our editorial team. This experience allows me to see some general trends, and I wish to speak to and about those—to increase the chances that a submitting author’s book review will be accepted (ultimately) for publication. I necessarily assume that the trends I see—and therefore, the advice I offer—remain applicable at other journals that publish book reviews, although I do not speak for them. Second, following the advice below will, I expect, increase an author’s chances of successfully publishing a book review, but it will not guarantee it. The stochastic component of the publication process always operates. In addition, different authors will succeed at following the advice to varying degrees. All this is to say that I want to be held blameless for individual publication results.

Having said all this, here is my advice:

Continue reading


Under Review: Cite This!

The boon and bane of our academic enterprise is that we get feedback all the time on our work. The boon is that our work is better for it: the hack-iest stuff I read is always stuff that was never submitted to any kind of refereeing process and relies instead on editors who seem to be blind to the hack-ness. The bane is that, well, rejection and criticism can not only delay publication but also hurt feelings. When well done, reviews further the enterprise. However, sometimes reviews seem to make authors dance in relatively unproductive ways. There have been lots of tweets and posts complaining about robustness checks–that reviewers have been asking authors to submit dozens (sometimes hundreds) of additional analyses.

My grievance du jour is something else–reviews that focus on stuff that one “should have cited.”

Continue reading


Marketing in Everything: Economics Edition

Krugman writes:

But neither I nor most economists are going to make the effort of puzzling through difficult writings unless we’re given some sort of proof of concept — a motivating example, a simple and effective summary, something to indicate that the effort will be worthwhile. Sorry, but I won’t commit to sitting through your two-hour movie if you can’t show me an interesting three-minute trailer.

Indeed.

Krugman concludes with the admonition that “nobody has to read what you write.” I wish this were more generally understood. I’ve read articles in Political Analysis about things I don’t care about using methods I’ll never master that were nevertheless riveting, and I’ve slogged through articles on topics I care passionately about in allegedly substantive journals that I never understood. There’s one article, which my co-author on a long-term project and I have read a half-dozen times, that completely escapes our ability to summarize. Adopting a useful frame and engaging with readers is always good.

I get the sense that some folks believe that engaging with readers means dumbing down their argument. Far from it! Engaging with readers means presenting a complex argument smartly. That’s much more challenging than making a complex argument obscure. Anyone can be recondite; only geniuses can be understood.
Continue reading


The Citation Gap: Results of a Self-Experiment

Both because of the unexpected direction yesterday took, and because I haven’t worked through my thoughts about any number of pressing current events, I thought I’d write about an experiment that I’ve been engaging in with my recent academic papers. You might recall the Maliniak, Powers, and Walter paper (soon to be out with International Organization) on citations and the gender gap. As Walter reported at Political Violence @ a Glance:

…. articles written by women in international relations are cited significantly less than articles written by men. This is true even if you control for institutional affiliation, productivity, publication venue, tenure, topic, methodology and anything else you can think of. Our hunch was that this gender citation gap was due to two things: (1) women citing themselves less than men, and (2) men tending to cite other men more than women in a field dominated by men.

After the wide-ranging discussion prompted by the piece, I decided to try to increase the number of women that I cited. Continue reading


Measuring Journal (and Scholarly) Outcomes

Another day, another piece chronicling problems with the metrics scholars use to assess quality. Colin Wight sends George Lozano’s “The Demise of the Impact Factor”:

Using a huge dataset of over 29 million papers and 800 million citations, we showed that from 1902 to 1990 the relationship between IF and paper citations had been getting stronger, but as predicted, since 1991 the opposite is true: the variance of papers’ citation rates around their respective journals’ IF [impact factor] has been steadily increasing. Currently, the strength of the relationship between IF and paper citation rate is down to the levels last seen around 1970.

Furthermore, we found that until 1990, of all papers, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing. So, the top journals were becoming the exclusive depositories of the most cited research. However, since 1991 the pattern has been the exact opposite. Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.

If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.

Continue reading


SAGE and the Duck of Minerva

This is just a short note to explain the appearance of the phrase “temporarily un-gated PDF” in Peter Henne’s guest post about contagion and the Syrian civil war.

We’ve been linking to academic articles for quite some time, but usually to the abstracts or random versions available on the web. But after The Monkey Cage announced a partnership with academic publishers to temporarily un-gate political-science articles, it occurred to me that nothing prevented us from asking publishers to do the same for the Duck of Minerva.

I’m pleased to announce that SAGE is the first to do so. Thanks to David Mainwaring for making this possible. Continue reading


(Peer/Non) Review

I understand that there’s been some recent blog-chatter on one of my favorite hobbyhorses, peer review in Political Science and International Relations. John Sides gets all ‘ruh roh’ because of a decades-old, but scary, experiment that shows pretty much what every other study of peer review shows:


Then, perhaps coincidentally, Steve Walt writes a longish post on “academic rigor” and peer review. Walt’s sorta right and sorta wrong, so I must write something of my own,* despite the guarantee of repetition.

Continue reading


Talking Academic Journals: Publishing the “Best Work”

Note: this is the second in a series of posts opening up issues relating to journal process for general discussion by the international-studies community.

All journals commit to publishing “the best work” that they receive within their remit. All journals aspire to publish “the best work,” period, within their specialization. This raises special challenges for a journal such as the International Studies Quarterly, which constitutes the “flagship” publication of the International Studies Association (ISA). The ISA is incredibly diverse. It includes members from all over the world–nearly half are based outside of North America–who work in different disciplines and within heterogeneous research cultures.  Continue reading


Talking Academic Journals: Collecting Data

Note: this is the first in what I hope will be a series of posts opening up issues relating to journal process for general discussion by the international-studies community.

Although many readers already know the relevant information, let me preface this post with some context. I am the incoming lead editor of International Studies Quarterly (ISQ), which is one of the journals in the International Studies Association family of publications. We are planning, with PTJ leading the effort, some interesting steps with respect to online content, social media, and e-journal integration–but those will be the subject of a later post. I have also been rather critical of the peer-review process and of the fact that we don’t study it very much in International Relations.

The fact is that ISQ by itself–let alone the collection of ISA journals and the broader community of cognate peer-reviewed publications–is sitting on a great deal of data about the process. Some of this data, such as the categories of submissions, is already in the electronic submission systems–but it isn’t terribly standardized. Many journals now collect information about whether a piece includes a female author. Given some indications of subtle, and consequential, gender bias, we have strong incentives to collect this kind of data.

But what, exactly, should we be collecting?
Continue reading


Tracking and Political-Science Journal Accountability

[Image: screenshot of the journal-tracking wiki]

I’m usually cautious about linking to anything in the PSJR/PSR family of sites, but this strikes me as pretty interesting: a wiki devoted to tracking political-science journals. Contributors note the journal, the turnaround time, and information about what happened to the article. Despite the promulgation of end-of-year journal reports, the submission-to-review-to-outcome process remains a mystery to many. In general, more information is a good thing — especially considering how much influence peer-reviewed publications have on the allocation of status, prestige, and resources in the field.

Continue reading


Is there a Downside to Open Access Publishing?

Robert Farley’s post last week about how long the journal publication process takes struck a chord. One of my journal articles took three years from submission to appearance and was gated (I had to get my own piece through inter-library loan once it came out, since the library didn’t have a subscription covering the most recent issues). I have often felt as Farley does: Continue reading


Is It the Gate or the Stuff Inside?

One of the topics online and at the ISA has been the gated-ness of academic writings. Journal articles are almost always behind a paywall, so ordinary folks cannot get at them. This is likely to change: many folks are now complaining, and the threat of ditching academic publishers for the net may force journal publishers into being responsive. We are already seeing more journals temporarily providing open access to various articles and issues.

But, I am afraid, my friends, that is almost entirely irrelevant. Why? Because why would any ordinary person want to read a jargon-filled hunk of social science? That is, the academic articles we produce are indeed intended to be read by other scholars, so paywall or not, these pieces are not accessible.

I am not advocating that journals and academics change the way articles are written.  Peer review and all that have problems, but I do think we need an intra-poli sci conversation, presenting our research to each other.

What we need to do is provide supplements to that intra-academic discussion so that our work can be digested by those who have not been trained in social science. Folks should be required to provide, dare I say it, blog posts or something like them to journals when they submit their articles–a less arcane, more transparent, more accessible summary of the research paper they seek to publish. Then, when the article gets published in the journal, the journal’s website would post the blog post. Yes, you can see abstracts already, but they are too short (a 750-word post is three times a 250-word abstract and five times a 150-word one), and they are not written for non-academic audiences.

Yes, it would require academics to develop their writing skills so that they can communicate beyond the academy, but most of us are getting public support one way or another.  So, we should be obligated to disseminate.

The funny thing is that ungating will be easier than my alternative. It is easier to get journal publishers to feel threatened by the web and figure out ways to improve access than to get all academics to write 750 more words in more everyday language–even though the latter is far cheaper in dollars spent than the former.


Null Results

Chris Blattman links to a paper (PDF) that finds no relationship between oil production and violence. He comments:

Regardless what you believe, however, there’s a striking shortage of null results papers published in top journals, even when later papers overturn the results. I’m more and more sickened by the lack of interest in null results. All part of my growing disillusionment with economics and political science publishing (and most of our so-called findings). Sigh…

To which I say, “Yes. Yes. A thousand times yes!”

If we really care about truth-seeking in the social sciences, let alone our own discipline of political science, we would consider null results on significant issues to be of critical importance. We would certainly consider them more important than the far-too-common paper whose “positive” result comes from, well, efforts to get to a positive result via “tweaking.”

Continue reading


New Page: Academia and Graduate School

I’ve put together a collection, albeit not a comprehensive one, of posts at the Duck of Minerva that focus on what might be called “the profession.” The link is now a tab (Academia and Graduate School) below our banner.

The rationale? Many of our most consistently popular pieces — including ones that still get significant hits years after their publication — fall into this category, so I think it might be a good service to try to consolidate links to them.

In theory, post labels should do that, but after seven years of myriad bloggers our “labels” are a disaster. We have over a thousand; they seem to break the blogger widget, which I have been unable to reinstall.

The page remains a work in progress. We’ll add more posts over time. Noticeable absences include Brian Rathbun’s cutting pieces on the discipline.


The Great Journal Impact Factor Race, Web 2.x, and the Evolution of the Academy

Back in May, Robert Kelly touched off a discussion about Journal Citation Reports and impact-factor rankings. Journal impact factor provides a textbook study in the consequences of a well-institutionalized but highly problematic quantitative measure. Impact factor is highly skewed, easily gamed, and somewhat arbitrary (its two-year and five-year windows). Nonetheless, it drives a great deal of behavior on the part of authors, editors, and publishers.
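
To make the “skewed” point concrete, here is a minimal sketch of the standard two-year calculation; the per-article citation counts below are invented purely for illustration:

```python
# Two-year impact factor: citations received in year Y to items published
# in years Y-1 and Y-2, divided by the number of citable items published
# in those two years.
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    return citations / citable_items

# Hypothetical citation counts for a ten-article journal: one "hit"
# paper dominates the distribution.
per_article = [40, 3, 2, 1, 1, 0, 0, 0, 0, 0]

print(two_year_impact_factor(sum(per_article), len(per_article)))  # 4.7

# The median article here earned zero or one citations, yet the journal's
# IF is 4.7. The mean is dragged up by the skewed tail, which is why an
# IF says little about the typical paper a journal publishes.
```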

Impact factor, of course, is just one objective in the pursuit of prestige. Editors, boards, and associations want the status that comes with being involved with a “leading journal.” Publishers want that prestige as well, but not only for its intrinsic value. For publishers, prestige, profile, status… these matter because they separate the journals that a library “must have” from those that the library can do without. So journals such as International Studies Quarterly and European Journal of International Relations remain valuable and prestigious commodities even if they’ve had a few “bad years” in terms of impact factor; very few international-relations scholars, let alone librarians, are going to ditch them in favor of Marine Policy.

I’ve learned a great deal about impact factor and “prestige” over the course of two editorial bids; indeed, one of the things I’ve stressed is how far behind the curve most international-relations journals are at exploiting new media to boost citation counts and the general profile of the journal. Publishers think so too. In fact, they’d like authors themselves to pick up some of the burden. Here’s an email from SAGE that a friend of mine sent along earlier today (shared in the original post as a series of page images):

This is pretty amazing stuff — on a number of levels.

SAGE covers virtually all the bases, from maintaining an Academia.edu account, to tweeting, to creating a website. They want their authors not only to self-promote on Wikipedia, but also to take up blogging.

I’m not sure I’m cool with this. SAGE is asking academics to make significant time commitments. For most article authors, these commitments aren’t commensurate with the benefits they’ll receive. It isn’t as if taking up tweeting instantly makes you an important figure in your area of expertise. My sense is that the RSS feeds of most international-relations and political-science blogs have fewer than fifty subscribers, which suggests typical readership in the dozens. This means that the marginal benefits of the most intensive activities SAGE recommends aren’t likely to be worth the costs in time and effort.

But journals don’t require these efforts to realize large payoffs. The most successful international-relations journals might achieve two-year impact factors of between three and four average citations per article. Once we get below the top few, we are talking about journals with between one and two average citations per article. The benefits for publishers such as SAGE, then, are potentially quite significant. Even if all that effort generates fewer than ten additional citations per relevant article, publishers might still see their journals (easily) catapulted up the rankings.
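
To see how thin those margins are, here is a back-of-the-envelope sketch; the journal size and citation counts are invented, chosen only to match the one-to-four range above:

```python
# Hypothetical mid-tier journal: 60 citable items in the two-year window,
# averaging 1.5 citations each, for a two-year impact factor of 1.5.
items = 60
baseline_citations = 90   # 60 articles * 1.5 citations each

# Suppose the promotional push adds just half a citation per article
# over the window.
extra_citations = 30      # 60 articles * 0.5 citations each

print((baseline_citations + extra_citations) / items)  # 2.0

# A gain of 0.5 moves this journal from the middle of the 1-2 pack to the
# top of it: a large jump in the rankings for a small absolute number of
# extra citations.
```
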
At the same time, I also feel a bit vindicated. This reinforces my sense — articulated best by Charli and Dan Drezner — that we’re going through a major transformation in the relationship between international studies and Web 2.x activities. Popular writers have long been doing — with the active encouragement of their publishers and agents — most of these things. Indeed, pretty much every author I’ve asked to interview for NBN’s SF and Fantasy channel maintains some combination of blog, website, twitter feed, Facebook presence, livejournal account, and so on. They have to: their income depends on their sales and their relationship with their readers.

The authors of the Duck aren’t exactly strangers to most of these methods of shameless self-promotion. Still, most of us got into new and social media for fun and community rather than for profit. I remember routinely having to justify my blogging activities to my friends, mentors, and colleagues. How times have changed.

At least those are a few of my disparate reactions. I wonder what our readers think.


Academic IR and the Information Age: Journals

As my post on “open access” demonstrates, I’ve been thinking a lot about International Relations journals over the last few months, particularly with respect to digital media. Charli’s excellent presentation on the discipline and “web 2.0” fell at an interesting time for me, as I was working on a journal bid. My sense is that academic International Relations journals have a mixed record when it comes to fulfilling their varied functions in the field, and that better internet integration would help matters. This post seeks to make that case — albeit in a very preliminary way — but it might also be read as a rumination on the purpose of IR journals… and an attempt to raise questions about the state of journals within international studies.

I guess a good place to start might be with the “official line” on academic journals. What are they for? The quasi-random people behind the wikipedia page on the subject write:

An academic journal is a peer-reviewed periodical in which scholarship relating to a particular academic discipline is published. Academic journals serve as forums for the introduction and presentation for scrutiny of new research, and the critique of existing research. Content typically takes the form of articles presenting original research, review articles, and book reviews.

We often hear about journals as sites for “leading” and “cutting-edge” research on particular topics and, depending on the journal, particular inflections. But, as many commentators point out, the time from submission to publication at many prestige journals now runs to at least a year. Articles sometimes accumulate a great deal of citation and discussion by appearing in online repositories, such as SSRN. Indeed, work in International Relations — most often quantitative — gets de facto peer reviewed many times before it appears in a journal. This kind of review is arguably less stochastic and, in aggregate, more complete than what a manuscript receives at a journal.

My sense (and that, I believe, of many others) is that academic journals serve a number of purposes that are connected, but not always tightly coupled, to idealized accounts of what they’re good for.

  1. Professional certification. Leading journals are hard to get into. The volume of submissions, as well as the (related) attitudes of referees and editors, requires a piece to “hit the jackpot” in terms of reviewer evaluations. Because referees and editors care about maintaining–and enhancing–the perceived quality of the journal, they work harder to make articles conform to the field or subfield standards of excellence. As we move down and across the journal hierarchy, these forces still operate, but to lesser degrees. Thus, lower-ranked journals or journals perceived as being “easier to get into” provide less symbolic capital.
  2. Defining standards of excellence. Another way of saying this is that journals produce, reproduce, and transform genre expectations for the style and content of scholarly work. What appears in leading journals sets standards for what should appear in leading journals; even if scholars don’t necessarily buy those standards, those attempting to publish in such journals will seek to replicate “the formula” in the hopes that it improves their chances of success. The same is true of less prestigious and more specialized journals, but those at the top of the hierarchy serve as examples (whether positive or cautionary) that inflect the genre expectations associated with many of their less famous relatives.
  3. Vetting work. Regardless of what one thinks of the state of peer review, it does provide a gauntlet that often improves–by some measure or other–the quality of the product. So does the attention of dedicated editors. At the very least, we believe this to be the case, which is all that matters for the role of journals in vetting scholarly pieces.
  4. Publicizing work. Scholars read journals–or at least tables of contents–that “matter” (i.e., have currency) in their subfield and in the broader field. So getting an article into a journal increases–subject to the breadth and depth of that journal’s reach–the chances that it will be read by a targeted audience.
  5. Constituting a scholarly community. Much of the above comes down to shaping the parameters of, and interactions within, scholarly communities. These “purposes” of journals do so in the basic sense of allocating prestige, generating expectations, and so on. But they also contribute to a scholarly sphere of intellectual exchange–they help to define what we talk about and argue over. 

My claim is as follows: every one of these purposes is better met by embedding scholarly journals in Web 2.0 architectures and technologies, whether open-access or not, peer-reviewed or not. The particular advantage of these hybrids lies in vetting, publicizing, and constituting a scholarly community.

Digital environments promote post-publication peer review both by allowing comments on articles and by facilitating the publication of traditional “response” pieces. There’s no reason to believe that they undermine the traditional vetting mechanisms, as they handle core articles the same way as non-embedded academic journals.

Traditional journals, on the other hand, do a poor job of publicizing work, particularly older articles that disappear into the ether (or the bowels of the library). That’s why blogs such as The Monkey Cage have occupied such an important position in the landscape. A journal embedded in shifting content — blogs, blog aggregation, web-only features, promotion of timely articles and articles that speak to recent debates in other journals — keeps people coming back to the site and, in doing so, exposes them to journal content.

The advantages in terms of constituting and maintaining a scholarly community should be obvious. Web 2.0 integration promises to transform “inputs into community” into ongoing intellectual transactions among not only scholars, but also the broader interested community.

As alluded to above, this transformation is already occurring. But I worry about two aspects of its trajectory.

  1. The most “important” general journals in the field are way behind. 
  2. A number of the current experiments are operating in isolation from the online academic IR community, e.g., they produce “blog posts” that read like op-eds intended for the New York Times, and the only evidence of being in conversation with that community is in the form of desultory blogrolls.

Thoughts?

