Tag: publishing (page 1 of 2)

What Makes a Good Book Review: Some Editorial Advice

The following is a guest post by Andrew Owsiak, Associate Professor at the University of Georgia and Book Editor for International Studies Review. 

The race to push scholarly research into the world carries a few consequences, perhaps the most notable being that it proves challenging to stay up-to-date with what is published. To help with this, some journals, for example International Studies Review[1], publish reviews of recently released scholarly books. These reviews offer a great source of information to those wishing to remain abreast of current trends, to incorporate relevant work into their own research, and to bring the latest studies into their classrooms. The value of this information, however, depends largely on how the reviewer writes his review. A reader who finds herself mired in jargon has no context in which to understand the review, while one facing only a series of generalities loses grasp of what the book is about.[2]

Mindful of the reader’s plight, I will offer some advice for those writing book reviews. I do this for two reasons. First, book review authors are often—although not exclusively—junior scholars with less publishing experience. As an editor, I enjoy seeing this. Book reviews can be a great, low-stakes (~1,000 words) point of entry into the publishing world. They familiarize authors with the submission, editorial, and decision process, often without introducing the peer-review component. They also allow authors to enter a dialogue with more established scholars (i.e., the book authors). Yet if we are to be fair to those writing the books, to the review authors, and to the readers of book reviews, it behooves us to offer review authors guidance about what a book review should and (probably) should not contain. How will they know otherwise? And this leads to my second motivation: nobody, to my knowledge, provides this advice comprehensively elsewhere.[3]

Before I continue, let me offer a couple of caveats. First and foremost, I do not pretend to hold all the answers about what journals want book reviews to contain. I have, however, solicited, monitored, read, and issued decisions on a fair number of book reviews in conjunction with other members of our editorial team. This experience allows me to see some general trends, and I wish to speak to and about those—to increase the chances that a submitting author’s book review will be accepted (ultimately) for publication. I necessarily assume that the trends I see—and therefore, the advice I offer—remain applicable at other journals that publish book reviews, although I do not speak for them. Second, following the advice below will, I expect, increase an author’s chances of successfully publishing a book review, but it will not guarantee it. The stochastic component of the publication process always operates. In addition, different authors will succeed at following the advice to varying degrees. All this is to say that I want to be held blameless for individual publication results.

Having said all this, here is my advice:

Continue reading

Under Review: Cite This!

The boon and bane of our academic enterprise is that we get feedback on our work all the time. Our work is better for it: the hack-iest stuff I read is always stuff that has not been submitted to any kind of refereeing process and relies instead on editors who seem blind to the hack-ness. The bane is that, well, rejection and criticism can not only delay publication but also hurt feelings. When well done, reviews further the enterprise. Sometimes, however, reviews seem to make authors dance in relatively unproductive ways. There have been lots of tweets and posts complaining about robustness checks–that reviewers have been asking authors to submit dozens (sometimes hundreds) of additional analyses.

My grievance du jour is something else–reviews that focus on stuff that one “should have cited.”

Continue reading

Marketing in Everything: Economics Edition

Krugman writes:

But neither I nor most economists are going to make the effort of puzzling through difficult writings unless we’re given some sort of proof of concept — a motivating example, a simple and effective summary, something to indicate that the effort will be worthwhile. Sorry, but I won’t commit to sitting through your two-hour movie if you can’t show me an interesting three-minute trailer.

Indeed.

Krugman concludes with the admonition that “nobody has to read what you write.” I wish this were more generally understood. I’ve read articles in Political Analysis about things I don’t care about using methods I’ll never master that were nevertheless riveting, and I’ve slogged through articles on topics I care passionately about in allegedly substantive journals that I never understood. There’s one article, which my co-author on a long-term project and I have read a half-dozen times, that completely escapes our ability to summarize. Adopting a useful frame and engaging with readers is always good.

I get the sense that some folks believe that engaging with readers means dumbing down their argument. Far from it! Engaging with readers means presenting a complex argument smartly. That’s much more challenging than making a complex argument obscure. Anyone can be recondite; only geniuses can be understood.
Continue reading

The Citation Gap: Results of a Self-Experiment

Both because of the unexpected direction yesterday took, and because I haven’t worked through my thoughts about any number of pressing current events, I thought I’d write about an experiment that I’ve been engaging in with my recent academic papers. You might recall the Maliniak, Powers, and Walter paper (soon to be out with International Organization) on citations and the gender gap. As Walter reported at Political Violence @ a Glance:

…. articles written by women in international relations are cited significantly less than articles written by men. This is true even if you control for institutional affiliation, productivity, publication venue, tenure, topic, methodology and anything else you can think of. Our hunch was that this gender citation gap was due to two things: (1) women citing themselves less than men, and (2) men tending to cite other men more than women in a field dominated by men.

After the wide-ranging discussion prompted by the piece, I decided to try to increase the number of women that I cited. Continue reading

Measuring Journal (and Scholarly) Outcomes

Another day, another piece chronicling problems with the metrics scholars use to assess quality. Colin Wight sends George Lozano’s “The Demise of the Impact Factor“:

Using a huge dataset of over 29 million papers and 800 million citations, we showed that from 1902 to 1990 the relationship between IF and paper citations had been getting stronger, but as predicted, since 1991 the opposite is true: the variance of papers’ citation rates around their respective journals’ IF [impact factor]  has been steadily increasing. Currently, the strength of the relationship between IF and paper citation rate is down to the levels last seen around 1970.

Furthermore, we found that until 1990, of all papers, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing. So, the top journals were becoming the exclusive depositories of the most cited research. However, since 1991 the pattern has been the exact opposite. Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.

If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.

Continue reading

SAGE and the Duck of Minerva

This is just a short note to explain the appearance of the phrase “temporarily un-gated PDF” in Peter Henne’s guest post about contagion and the Syrian civil war.

We’ve been linking to academic articles for quite some time, but usually to the abstracts or random versions available on the web. But after The Monkey Cage announced a partnership with academic publishers to temporarily un-gate political-science articles, it occurred to me that nothing prevented us from asking publishers to do the same for the Duck of Minerva.

I’m pleased to announce that SAGE is the first to do so. Thanks to David Mainwaring for making this possible. Continue reading

(Peer/Non) Review

I understand that there’s been some recent blog-chatter on one of my favorite hobbyhorses, peer review in Political Science and International Relations. John Sides gets all ‘ruh roh’ because of a decades-old, but scary, experiment that shows pretty much what every other study of peer review shows:


Then, perhaps coincidentally, Steve Walt writes a longish post on “academic rigor” and peer review. Walt’s sorta right and sorta wrong, so I must write something of my own,* despite the guarantee of repetition.

Continue reading

Talking Academic Journals: Publishing the “Best Work”

Note: this is the second in a series of posts opening up issues relating to journal process for general discussion by the international-studies community.

All journals commit to publishing “the best work” that they receive within their remit. All journals aspire to publish “the best work,” period, within their specialization. This raises special challenges for a journal such as the International Studies Quarterly, which constitutes the “flagship” publication of the International Studies Association (ISA). The ISA is incredibly diverse. It includes members from all over the world–nearly half are based outside of North America–who work in different disciplines and within heterogeneous research cultures.  Continue reading

Talking Academic Journals: Collecting Data

Note: this is the first in what I hope will be a series of posts opening up issues relating to journal process for general discussion by the international-studies community.

Although many readers already know the relevant information, let me preface this post with some context. I am the incoming lead editor of International Studies Quarterly (ISQ), which is one of the journals in the International Studies Association family of publications. We are planning, with PTJ leading the effort, some interesting steps with respect to online content, social media, and e-journal integration–but those will be the subject of a later post. I have also been rather critical of the peer-review process and of the fact that we don’t study it very much in International Relations.

The fact is that ISQ by itself–let alone the collection of ISA journals and the broader community of cognate peer-reviewed publications–is sitting on a great deal of data about the process. Some of this data, such as the categories of submissions, is already in the electronic submission systems–but it isn’t terribly standardized. Many journals now collect information about whether a piece includes a female author. Given some indications of subtle, and consequential, gender bias, we have strong incentives to collect this kind of data.

But what, exactly, should we be collecting?
Continue reading

Tracking and Political-Science Journal Accountability

[Image: screenshot of the journal-tracking wiki]

I’m usually cautious about linking to anything in the PSJR/PSR family of sites, but this strikes me as pretty interesting: a wiki devoted to tracking political-science journals. Contributors note the journal, the turnaround time, and information about what happened to the article. Despite the promulgation of end-of-year journal reports, the submission-to-review-to-outcome process remains a mystery to many. In general, more information is a good thing — especially considering how much influence peer-reviewed publications have on the allocation of status, prestige, and resources in the field.

Continue reading

Is there a Downside to Open Access Publishing?

Robert Farley’s post last week about how long the journal publication process takes struck a chord. One of my journal articles took three years from submission to appearance and was gated (I had to get my own piece through inter-library loan once it came out, because the library didn’t have a subscription covering the most recent issues). I have often felt as Farley does: Continue reading

Is It the Gate or the Stuff Inside?

One of the topics online and at the ISA has been the gated-ness of academic writings. Journal articles are almost always behind a paywall, so ordinary folks cannot get at them. This is likely to change, as many folks are now complaining, and the threat of ditching academic publishers for the net may force journal publishers to be responsive. We are already seeing more journals temporarily providing open access to various articles and issues.

But, I am afraid, my friends, that is almost entirely irrelevant. Why? Because why would any ordinary person want to read a jargon-filled hunk of social science? That is, the academic articles we produce are indeed intended to be read by other scholars, so paywall or not, these pieces are not accessible.

I am not advocating that journals and academics change the way articles are written.  Peer review and all that have problems, but I do think we need an intra-poli sci conversation, presenting our research to each other.

What we need to do is provide supplements to that intra-academic discussion so that our work can be digested by those who have not been trained in social science. Folks should be required to provide, dare I say it, a blog post or something like it to journals when they submit their articles–a less arcane, more transparent, more accessible summary of the research paper that they seek to publish. Then, when the article gets published in the journal, the journal’s website would post the blog post. Yes, you can see abstracts already, but they are too short (750 words is three times a 250-word abstract and five times a 150-word one), and they are not written for non-academic audiences.

Yes, it would require academics to develop their writing skills so that they can communicate beyond the academy, but most of us are getting public support one way or another.  So, we should be obligated to disseminate.

The funny thing is that ungating will be easier than my alternative. It is easier to get journal publishers, threatened by the web, to figure out ways to improve access than to get all academics to write 750 more words in more everyday language, even though the latter is far cheaper in dollars spent than the former.

Null Results

Chris Blattman links to a paper (PDF) that finds no relationship between oil production and violence. He comments:

Regardless what you believe, however, there’s a striking shortage of null results papers published in top journals, even when later papers overturn the results. I’m more and more sickened by the lack of interest in null results. All part of my growing disillusionment with economics and political science publishing (and most of our so-called findings). Sigh…

To which I say, “Yes. Yes. A thousand times yes!”

If we really cared about truth-seeking in the social sciences, let alone our own discipline of political science, we would consider null results on significant issues to be of critical importance. We would certainly consider them more important than the far-too-common paper whose “positive” result comes from, well, efforts to get to a positive result via “tweaking.”

Continue reading

New Page: Academia and Graduate School

I’ve put together a collection, albeit not a comprehensive one, of posts at the Duck of Minerva that focus on what might be called “the profession.” The link is now a tab (Academia and Graduate School) below our banner.

The rationale? Many of our most consistently popular pieces — including ones that still get significant hits years after their publication — fall into this category, so I think it might be a good service to try to consolidate links to them.

In theory, post labels should do that, but after seven years of myriad bloggers our “labels” are a disaster. We have over a thousand; they seem to break the blogger widget, which I have been unable to reinstall.

The page remains a work in progress. We’ll add more posts over time. Noticeable absences include Brian Rathbun’s cutting pieces on the discipline.

The Great Journal Impact Factor Race, Web 2.x, and the Evolution of the Academy

Back in May Robert Kelley touched off a discussion about Journal Citation Reports and impact factor rankings. Journal impact factor provides a textbook study in the consequences of a well-institutionalized but highly problematic quantitative measure. Impact factor is highly skewed, easily gamed, and somewhat arbitrary (two-year and five-year windows). Nonetheless, it drives a great deal of behavior on the part of authors, editors, and publishers.

Impact factor, of course, is just one objective in the pursuit of prestige. Editors, boards, and associations want the status that comes with being involved with a “leading journal.” Publishers want that prestige as well, but not merely for its intrinsic value. For publishers, prestige, profile, and status matter because they separate the journals that a library “must have” from those that the library can do without. So journals such as International Studies Quarterly and European Journal of International Relations remain valuable and prestigious commodities even if they’ve had a few “bad years” in terms of impact factor; very few international-relations scholars, let alone librarians, are going to ditch them in favor of Marine Policy.

I’ve learned a great deal about impact factor and “prestige” over the course of two editorial bids; indeed, one of the things I’ve stressed is how far behind the curve most international-relations journals are at exploiting new media to boost citation counts and the general profile of the journal. Publishers think so too. Indeed, they’d like authors themselves to pick up some of the burden. Here’s an email from SAGE that a friend of mine sent along earlier today (each page is an image, so if you have trouble reading them click on each to enlarge):

This is pretty amazing stuff — on a number of levels.

SAGE covers virtually all the bases, from maintaining an Academia.edu account, to tweeting, to creating a website. They want their authors not only to self-promote on wikipedia, but also to take up blogging.
I’m not sure I’m cool with this. SAGE is asking academics to make significant time commitments. For most article authors, these commitments aren’t commensurate with the benefits they’ll receive. It isn’t as if taking up tweeting instantly makes you an important figure in your area of expertise. My sense is that the RSS feeds of most international-relations and political-science blogs have fewer than fifty subscribers, which suggests typical readership in the dozens. This means that the marginal benefits of the most intensive activities SAGE recommends aren’t likely to be worth the costs in time and effort.
But journals don’t need these efforts to produce much in order to realize large payoffs. The most successful international-relations journals might achieve two-year impact factors of between three and four average citations per article. Once we get below the top few, we are talking about journals with between one and two average citations per article. The benefits for publishers such as SAGE, then, are potentially quite significant. Even if all that effort generates fewer than ten additional citations per relevant article, they still might see their journals (easily) catapulted up the rankings.
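To make the arithmetic concrete, here is a minimal sketch, in Python, of how a standard two-year impact factor is computed and how a small per-article citation bump can move a journal relative to its peers. The journal names and counts are entirely hypothetical; only the rough one-to-two versus three-to-four citation averages echo the figures above.

```python
# Minimal sketch of two-year impact factor arithmetic.
# All journals and numbers below are hypothetical illustrations.

def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """Citations received this year to articles from the previous two years,
    divided by the number of articles published in those two years."""
    return citations / citable_items

journals = {
    # name: (citations to the last two years' articles, articles published in that window)
    "Hypothetical Top Journal": (320, 90),    # ~3.6 citations per article
    "Hypothetical Mid Journal": (120, 80),    # ~1.5
    "Hypothetical Rival Journal": (100, 75),  # ~1.3
}

for name, (cites, items) in journals.items():
    print(f"{name}: IF = {two_year_impact_factor(cites, items):.2f}")

# A modest bump of one extra citation per article lifts the mid-tier journal
# from ~1.5 to ~2.5 and leapfrogs its rival in the rankings.
cites, items = journals["Hypothetical Mid Journal"]
boosted = two_year_impact_factor(cites + items, items)
print(f"Hypothetical Mid Journal with +1 citation per article: IF = {boosted:.2f}")
```

Nothing here depends on the exact numbers; the point is simply that when the denominator is small and averages hover around one or two citations per article, a handful of extra citations goes a long way.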
At the same time, I also feel a bit vindicated. This reinforces my sense — articulated best by Charli and Dan Drezner — that we’re going through a major transformation in the relationship between international studies and Web 2.x activities. Popular writers have long been doing — with the active encouragement of their publishers and agents — most of these things. Indeed, pretty much every author I’ve asked to interview for NBN’s SF and Fantasy channel maintains some combination of blog, website, twitter feed, Facebook presence, livejournal account, and so on. They have to: their income depends on their sales and their relationship with their readers.

The authors of the Duck aren’t exactly strangers to most of these methods of shameless self-promotion. Still, most of us got into new and social media for fun and community rather than for profit. I remember routinely having to justify my blogging activities to my friends, mentors, and colleagues. How times have changed.

At least those are a few of my disparate reactions. I wonder what our readers think.

Academic IR and the Information Age: Journals

As my post on “open access” demonstrates, I’ve been thinking a lot about International Relations journals over the last few months, particularly with respect to digital media. Charli’s excellent presentation on the discipline and “web 2.0” fell at an interesting time for me, as I was working on a journal bid. My sense is that academic International Relations journals have a mixed record when it comes to fulfilling their varied functions in the field, and that better internet integration would help matters. This post seeks to make that case — albeit in a very preliminary way — but also might be read as a rumination on the purpose of IR journals… and an attempt to raise questions about the state of journals within international studies.

I guess a good place to start might be with the “official line” on academic journals. What are they for? The quasi-random people behind the wikipedia page on the subject write:

An academic journal is a peer-reviewed periodical in which scholarship relating to a particular academic discipline is published. Academic journals serve as forums for the introduction and presentation for scrutiny of new research, and the critique of existing research. Content typically takes the form of articles presenting original research, review articles, and book reviews.

We often hear about journals as sites for “leading” and “cutting-edge” research on particular topics and, depending on the journal, particular inflections. But, as many commentators point out, the time from submission to publication at many prestige journals now lasts at least a year. Articles sometimes accumulate a great deal of citation and discussion by appearing in online repositories, such as SSRN. Indeed, work in International Relations — most often quantitative — gets de facto peer reviewed many times before it appears in a journal. This kind of peer review is arguably less stochastic and, in aggregate, more complete than what a manuscript receives at a journal.

My sense (and that, I believe, of many others) is that academic journals serve a number of purposes that are connected, but not always tightly coupled, to idealized accounts of what they’re good for.

  1. Professional certification. Leading journals are hard to get into. The volume of submissions, as well as the (related) attitudes of referees and editors, requires a piece to “hit the jackpot” in terms of reviewer evaluations. Because referees and editors care about maintaining–and enhancing–the perceived quality of the journal, they work harder to make articles conform to the field or subfield standards of excellence. As we move down and across the journal hierarchy, these forces still operate but to lesser degrees. Thus, lower-ranked journals or journals perceived as being “easier to get into” provide less symbolic capital.
  2. Defining standards of excellence. Another way of saying this is that journals produce, reproduce, and transform genre expectations for the style and content of scholarly work. What appears in leading journals sets standards for what should appear in leading journals; even if scholars don’t necessarily buy those standards, those attempting to publish in such journals will seek to replicate “the formula” in the hopes that it improves their chances of success. The same is true of less prestigious and more specialized journals, but those at the top of the hierarchy serve as examples (whether positive or cautionary) that inflect the genre expectations associated with many of their less famous relatives.
  3. Vetting work. Regardless of what one thinks of the state of peer review, it does provide a gauntlet that often improves–by some measure or other–the quality of the product. So does the attention of dedicated editors. At the very least, we believe this to be the case, which is all that matters for the role of journals in vetting scholarly pieces.
  4. Publicizing work. Scholars read journals–or at least tables of contents–that “matter” (i.e., have currency) in their subfield and in the broader field. So getting an article into a journal increases– subject to the breadth and depth of that journal’s reach–the chances that it will be read by a targeted audience. 
  5. Constituting a scholarly community. Much of the above comes down to shaping the parameters of, and interactions within, scholarly communities. These “purposes” of journals do so in the basic sense of allocating prestige, generating expectations, and so on. But they also contribute to a scholarly sphere of intellectual exchange–they help to define what we talk about and argue over. 

My claim is as follows: every one of these purposes is better met by embedding scholarly journals in Web 2.0 architectures and technologies, whether open-access or not, peer-reviewed or not. The particular advantage of these hybrids lies in vetting, publicizing, and constituting a scholarly community.

Digital environments promote post-publication peer review both by allowing comments on articles and by facilitating the publication of traditional “response” pieces. There’s no reason to believe that they undermine the traditional vetting mechanisms, as they handle core articles the same way as non-embedded academic journals.

Traditional journals, on the other hand, do a poor job of publicizing work, particularly older articles that disappear into the ether (or the bowels of the library). That’s why blogs such as The Monkey Cage have occupied such an important position in the landscape. A journal embedded in shifting content — blogs, blog aggregation, web-only features, promotion of timely articles and articles that speak to recent debates in other journals — keeps people coming back to the site and, in doing so, exposes them to journal content.

The advantages in terms of constituting and maintaining a scholarly community should be obvious. Web 2.0 integration promises to transform “inputs into community” into ongoing intellectual transactions among not only scholars, but also the broader interested community.

As alluded to above, this transformation is already occurring. But I worry about two aspects of its trajectory.

  1. The most “important” general journals in the field are way behind. 
  2. A number of the current experiments are operating in isolation from the online academic IR community, e.g., they produce “blog posts” that read like op-eds intended for the New York Times, and the only evidence of being in conversation with that community is in the form of desultory blogrolls.

Thoughts?

Open Access and IR Journals

Some time ago Thomas Rid had an amazing post arguing for an open-access revolution in our field. I won’t repeat the arguments here; you can read them for yourself. The open-access movement is showing signs of momentum. Indeed, at BISA/ISA in Edinburgh, a number of people agitated for open access for the Review of International Studies (RIS) at its relaunch event.

It seems that there are very few significant IR journals in a position to go open access. The obvious candidates would be journals associated with professional associations — in addition to RIS, that would include the International Studies Association journals, the European Journal of International Relations, and some others. But at least BISA and SGIR (soon to be EISA) use the revenue from the journals to support their activities. That leaves the independent foundation journals, such as International Organization, as the most likely candidates for moving to open access.

Open-access journals sustain themselves through some combination of subsidy and pay-for-publication. In essence, authors provide a fee upon acceptance if they want their articles to appear “in print.” It took PLoS — probably the most famous member of the open-access family — a number of years for revenues to exceed costs. I can imagine a lot of IR scholars recoiling at paying such a fee. The math suggests that their institutions (if they are associated with one) should be happy to fork over the money, as doing so is cheaper than subscribing to journals. But right now, at least, institutions already pay for standard IR journals, so the open-access journals represent an additional fee. This isn’t an issue if the institution is Harvard University, but it might be for smaller places — particularly if the fee comes out of cash-strapped departmental coffers rather than scientific grants.
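A minimal sketch of that budget logic, with entirely hypothetical figures (none of these are real subscription prices or article-processing charges), might look like this:

```python
# Hypothetical illustration of the transition problem described above.
# None of these figures are real prices; they are placeholders.

subscription_spend = 250_000   # what a library already pays for gated journals (assumed)
apc_per_article = 1_500        # assumed open-access article-processing charge
articles_per_year = 60         # assumed annual output that would incur such charges

apc_spend = apc_per_article * articles_per_year

print(f"Open-access fees alone:     ${apc_spend:,}")
print(f"Subscriptions alone:        ${subscription_spend:,}")
# The catch: while gated journals still exist, an institution pays both,
# so open-access fees arrive as an additional cost rather than a substitute.
print(f"Both during the transition: ${apc_spend + subscription_spend:,}")
```

The numbers are arbitrary; the structure is the point: the fee only looks cheap once it replaces, rather than sits on top of, the subscription bill.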
The graphic comes from the Chronicle of Higher Education, which, in 2011, reported on a study highlighting the two biggest hurdles to open access:

A new survey of nearly 40,000 scholars across the natural sciences, humanities, and social sciences shows that almost 90 percent of them believe open-access journals are good for the research community and the individual researcher. But charges for publishing and the perception that open-access journals are of lower quality than traditional publications deter scholars from the open-access route, according to the Study of Open Access Publishing report, by an international team of researchers.

These concerns are likely to be a particular problem in IR. The aforementioned factors suggest that most open-access journals will be both digital-only and new. Given the field’s elitism concerning “journal hierarchy,” and its general conservatism when it comes to all things smacking of “web 2.0”, those are both significant barriers to success. I think it would be very difficult to ask IR scholars to pay-for-publication in an unranked, digital-only journal. While everyone knows this is the future, it isn’t clear how we will get there.

This reticence comes despite the fact that, if mid-tier blogs such as the Duck of Minerva are any indication, more people will read a given piece in an open-access digital journal than a typical one in a top-tier — let alone a second-tier — traditional journal.* Thomas Rid got access to the raw Taylor and Francis “most read” numbers and this is what he found:



These are, as Thomas notes, crude indicators. And blog posts are, in general, shorter and more accessible than academic articles. Still, they point to the advantages of ungated academic work, particularly if presented in the right way. It would be interesting to know the readership of the papers at e-ir, which might provide a better comparison.

Indeed, a few months ago PTJ and I had some discussions about starting a journal using a “non-traditional” model. We estimated our barebones costs at roughly $25k to pay for a graduate-student assistant, plus some unspecified amount to handle incidentals. Startup costs would probably run between $5K and $7K, and it would be best to have some money to subsidize undergraduate interns to help keep the technical side running. All of this assumes a journal that is, in essence, a labor of love. No money for course releases, travel and promotion, and all that other stuff.

One idea was to publish volumes as e-books for $0.99, but the economics don’t work and you wind up with a cheap, but still gated, product. The pay-for-play model would impose prohibitive costs on authors, particularly in the context of a startup. And, of course, we both think that there are too many journals in the field already.
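A back-of-the-envelope sketch of why the 99-cent e-book route fails, using the $25k running-cost estimate from above and an assumed retailer cut (the 30% figure is a placeholder, not a quoted rate):

```python
# Why 99-cent e-book volumes don't cover the journal's running costs.
# The $25k and $0.99 figures come from the post; the retailer cut is assumed.

import math

annual_costs = 25_000        # rough graduate-assistant cost estimated above
price_per_volume = 0.99
retailer_cut = 0.30          # assumption: the store keeps ~30% of each sale

revenue_per_sale = price_per_volume * (1 - retailer_cut)
breakeven_sales = math.ceil(annual_costs / revenue_per_sale)

print(f"Revenue per copy: ${revenue_per_sale:.2f}")
print(f"Copies needed per year just to break even: {breakeven_sales:,}")
# Roughly 36,000 copies a year for a niche IR journal, and the product
# would still sit behind a (very cheap) paywall.
```

Even generous assumptions about the retailer's share leave the required sales far beyond what a niche scholarly audience could plausibly supply.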

So the question remains: how to finance this kind of endeavor?

Still, there’s a certain attraction to the model.

An online open-access journal could firmly break with the tyranny of the quarterly volume. No more “online first” as an orphan, uncertain category. The editors simply need to keep the standards of the journal high — as reflected in quality and acceptance rate — and they can publish pieces whenever they are accepted and processed. Volume numbers would persist, but as temporal markers for the purpose of citation rather than as bundled artifacts.

Because the content would be ungated, it would be even easier to integrate the journal into a blogging and social-media environment than it would be for a traditional publication. One could build an intellectual community and ensure repeat visitors — and with them, greater likelihood that articles would be read and cited.

But, even if we could somehow come up with the funds, the experiment strikes me as pretty high risk. We would need to convince some high-profile scholars to provide quality pieces — ones good enough to survive rigorous peer review — to legitimize the endeavor. We’d need to convince reviewers to take it seriously. And there are a lot of other institutional barriers.

I guess what I’m talking about is, in essence, a Duck of Minerva journal, but (probably) with a less whimsical name. I wonder what our readers think of that?

*As I discovered while putting together a proposal for wrapping a journal in a webzine (see here for an example of poor implementation of a good idea), an undercount of the most-viewed pages at the Duck outdistances the download figures for the most-read piece at the American Political Science Review. And, as I alluded to earlier, neither KoW nor the Duck is in the same league as Crooked Timber, The Monkey Cage, Steve Walt, Dan Drezner, or any number of higher-profile blogs. By the way, if any journal editors out there are interested in bringing me on to spearhead a web strategy likely to (among other things) increase your impact factor, contact me.

What Exactly is the Social Science Citation Index Anyway?

[Image: Thomson Reuters Journal Citation Reports promotional banner]

Yeah, I don’t really know either. I always hear the expression ‘SSCI’ thrown around as the gold standard for social science work. Administrators seem to love it, but where it comes from and how it gets compiled I don’t really understand. Given that we all seem to use this language and worry about impact factor all the time, I thought I would simply post the list of journals for IR ranked by impact factor (after the break).

I don’t think I ever actually saw this list before all laid out completely. In grad school, I just had a vague idea that I was supposed to send my stuff to the same journals whose articles I was reading in class. But given that I haven’t found this list posted on the internet anywhere, here it is. I don’t know if that means it is gated or something, or if my school has a subscription, or whatever. Anyway, I thought posting the whole IR list would be helpful for the Duck readership.

But I have a few questions. First, why does Thomson-Reuters create this? Why don’t we do it? Does anyone actually know what they do that qualifies them for this? And don’t say ‘consulting’ or ‘knowledge services’ or that sort of MBA-speak. The picture above includes some modernist, high-tech skyscraper, presumably to suggest that lots of brilliant, hi-tech theorists are in there crunching away at big numbers (but the flower tells you they have a soft side too – ahh), but I don’t buy it. Are these guys former academics who know what we read? Who are they? Does anyone know? The T-R website tells you nothing beyond buzzwords like ‘the knowledge effect’ and ‘synergy.’ I am genuinely curious how T-R got this gig and why we listen to them. Why don’t we make our own list?

Anyway, I don’t really know, so I just thought I’d throw it out there. Check the IR rankings below.

More questions:

I am not sure if the SSCI and the Journal Citation Reports from T-R are different or not or what. Click here to see the SSCI list; and here is the JCR link, which is probably gated, but ask your administration; they probably have access. There are 3038 journals in the whole SSCI list (!), 107 listed under political science, and 82 under IR. There is some overlap between the last two, but the PS list does not completely subsume the IR list, as I think most of us would think it should. For example, IS is listed only under IR, not political science, but ISQ is listed under both, even though I think most people would say IS is a better journal than ISQ. Also, there is no identifiable list for the other 3 subfields of political science. I find that very unhelpful. More generally, I would like to know how T-R chooses which journals are on the SSCI and which not. It doesn’t take much effort to see that they’re almost all published in English…

Next, I thought the SSCI included only peer-reviewed journals, but Foreign Affairs and the Washington Quarterly (which I understand to be solicited, not actually peer-reviewed – correct me if I am wrong) are listed on the IR list, and even Commentary and the Nation magazine are on the PS list. Wow – your neocon ideological ravings can actually count as scholarship. Obviously FA should be ranked for impact factor; it’s hugely influential. But does it belong on the SSCI? Note also that ISR is listed on the IR roster, as is its old incarnation, the Mershon ISR. Hasn’t that been gone now for more than a decade? Also, when you access the impact factors (below), T-R provides an IR list with its ‘Journal Citation Reports’ that has only 78 journals listed for IR, not 82. So the SSCI for IR (82) does not quite equal the JCR for IR (78). Is that just a clerical error? If so, does that mean the super-geniuses in the futuristic skyscraper are spending too much time looking out the windows at the flowers? I guess if you double-count M/ISR, you get 79, which is pretty close to 82, but given how definitive this list is supposed to be, it seems like there are problems and confusions.

2010 is the most recent year T-R provides a ranking, so I used that, plus the rolling 5-year impact factor. The ranking on the left follows the 5 year impact factor, not the 2010 one.

A few things leap out to me:

1. How did International Studies Perspectives rocket up so high in less than 15 years, higher than EJIR, RIPE, and Foreign Affairs? Wow. I guess I should read it more.

2. What is Marine Policy (no. 11) and how did it get so very high also?

3. Security Studies at 27 doesn’t sound right to me. We read that all the time in grad school.

4. A lot of the newest ones, at the bottom without a 5-year ranking, come from Asia. That isn’t surprising, as Asian countries are throwing more and more money at universities. That’s probably healthy in terms of field-range, to move beyond just Western-published ones.

5. Why haven’t I ever even heard of something like half of these journals? I guess we really are a hermeneutic circle – reading just the same journals again and again – APSR, IO, IS, ISQ, EJIR. That’s pretty scholastic when this IR SSCI list shows a rather interesting diversity I never have time to read. A shame actually…

Rank                 Title                          2010 Impact Factor      5-Year Impact Factor

1 INT ORGAN 3.551 5.059
2 INT SECURITY 3.444 4.214
3 WORLD POLIT 2.889 3.903
4 J CONFLICT RESOLUT 1.883 3.165
5 INT STUD QUART 1.523 2.427
6 INT STUD PERSPECT 0.719 2.344
7 EUR J INT RELAT 1.426 2.337
8 FOREIGN AFF 2.557 2.263
9 COMMON MKT LAW REV 2.194 2.071
10 J PEACE RES 1.476 2.036
11 MAR POLICY 2.053 1.961
12 INT J TRANSIT JUST 1.756 1.923
13 INT RELAT 0.473 1.743
14 JCMS-J COMMON MARK S 1.274 1.643
15 INT STUD REV 0.803 1.621
16 REV INT POLIT ECON 0.861 1.519
17 SECUR DIALOGUE 1.6 1.51
18 INT AFF 1.198 1.496
19 CONFLICT MANAG PEACE 0.682 1.423
19 EUR J INT LAW 1.5 1.423
21 WORLD ECON 0.878 1.382
22 STUD COMP INT DEV 0.605 1.352
23 BIOSECUR BIOTERROR 1.26 1.265
24 REV WORLD ECON 0.966 1.201
25 REV INT STUD 0.98 1.177
26 MILLENNIUM-J INT ST 0.727 1.084
27 SECUR STUD 0.766 1.065
28 FOREIGN POLICY ANAL 0.7 1.032
29 TERROR POLIT VIOLENC 0.814 0.946
30 AM J INT LAW 0.865 0.858
31 GLOBAL GOV 0.8 0.848
32 PAC REV 0.683 0.791
33 ALTERNATIVES 0.357 0.776
34 LAT AM POLIT SOC 0.34 0.731
35 STANFORD J INT LAW 0.6 0.727
36 WASH QUART 0.65 0.721
37 CORNELL INT LAW J 0.541 0.693
38 COLUMBIA J TRANS LAW 0.741 0.671
39 J JPN INT ECON 0.444 0.662
40 COMMUNIS POST-COMMUN 0.211 0.64
41 B ATOM SCI 1.057 0.632
42 INT INTERACT 0.258 0.622
43 SURVIVAL 0.472 0.615
44 EMERG MARK FINANC TR 0.444 0.558
45 INT J CONFL VIOLENCE 0.586 0.524
46 OCEAN DEV INT LAW 0.282 0.518
47 AUST J INT AFF 0.508 0.517
48 J STRATEGIC STUD 0.344 0.491
49 SPACE POLICY 0.308 0.381
50 MIDDLE EAST POLICY 0.219 0.309
51 ISSUES STUD 0.13 0.284
52 WAR HIST 0.265 0.262
53 KOREAN J DEF ANAL 0.304 0.261
54 CURR HIST 0.139 0.19
55 WORLD POLICY J 0.144 0.164
56 J MARIT LAW COMMER 0.244 0.15
57 INT POLITIK 0.017 0.042
58 INT POLIT-OSLO 0.013 0.024
59 ASIA EUR J 0.237
59 ASIAN J WTO INT HEAL 0.333
59 ASIAN PERSPECT-SEOUL 0.326
59 BRIT J POLIT INT REL 1.025
59 CAMB REV INT AFF 0.18
59 CHIN J INT LAW 0.206
59 COOP CONFL 0.868
59 INT POLITICS 0.564
59 INT RELAT ASIA-PAC 0.676
59 J HUM RIGHTS 0.34
59 J INT RELAT DEV 0.429
59 J WORLD TRADE 0.398
59 KOREA OBS 0.292
59 N KOREAN REV 0.75
59 PAC FOCUS 0.459
59 REV DERECHO COMUNITA 0.098
59 REV INT ORGAN 0.971
59 STUD CONFL TERROR 0.588
59 ULUSLAR ILISKILER 0.224
59 WORLD TRADE REV 1.231

Are IR Titles Getting Increasingly Boring? Evidence from a Data Set

Abstract

Though scholars widely claim that they are capable of writing creative titles, there exist some notable skeptics. Resolving this debate requires empirical evidence. However, beyond a few anecdotes, no one has systematically tested trends in the mind-numbing dullness of IR article titles. I correct this lacuna through the use of an original data set containing eight independent measurements of the originality and wittiness of article titles. Using various statistical techniques, I find that, for articles appearing in six leading journals between 1985 and 2005, titles are indeed becoming more boring over time. In addition to confirming a depressing decline in titular creativity, my study reveals two additional findings of significance. First, titles that take the form of “historical quotation: explanation of what the article is actually about” are only interesting for the years 1985-1995, after which they become extremely boring. Second, the most consistently insipid article titles consist of a putative correlation expressed as a question followed by an independent clause alluding to a data set. My findings and research methods have important implications for the field, as I assert repeatedly in the body of this article despite overwhelming evidence to the contrary.

‘Bleg’: How Long are your ‘Revise & Resubmit’ Letters to the Editor?


I have been asked to revise and resubmit an article submitted to an IR journal. But it’s a big r&r; the editor even said it would be “a great deal of work” (groan). While I must make the changes to the ms, I must also submit a letter to the editors and reviewers to explain my changes. That’s normal of course, but I wonder how the community would appraise the proper length of a letter to the editor for a major r&r. In my last r&r, thankfully a minor one, I wrote 2-3 pages. But for a major r&r that “needs a great deal of work,” I was thinking around 10 pages. Is that too much? Would that bore you to tears? (Actually, don’t answer that.)

More generally, I think this is an interesting, undiscussed question for the field, because I have no idea if there are any norms at all on this. I can’t recall discussing this issue ever in graduate school (probably because I couldn’t have gotten an r&r anyway and didn’t even know what r&r meant). Nor can I recall seeing anything on this in all those journals we get from APSA (so many…). So whadda ya think?

Cross-posted on Asian Security Blog.
