As my post on “open access” demonstrates, I’ve been thinking a lot about International Relations journals over the last few months, particularly with respect to digital media. Charli’s excellent presentation on the discipline and “web 2.0” fell at an interesting time for me, as I was working on a journal bid. My sense is that academic International Relations journals have a mixed record when it comes to fulfilling their varied functions in the field, and that better internet integration would help matters. This post seeks to make that case — albeit in a very preliminary way — but it might also be read as a rumination on the purpose of IR journals… and an attempt to raise questions about the state of journals within international studies.

I guess a good place to start might be with the “official line” on academic journals. What are they for? The quasi-random people behind the Wikipedia page on the subject write:

An academic journal is a peer-reviewed periodical in which scholarship relating to a particular academic discipline is published. Academic journals serve as forums for the introduction and presentation for scrutiny of new research, and the critique of existing research. Content typically takes the form of articles presenting original research, review articles, and book reviews.

We often hear about journals as sites for “leading” and “cutting-edge” research on particular topics and, depending on the journal, particular inflections. But, as many commentators point out, the time from submission to publication at many prestige journals now lasts at least a year. Articles sometimes accumulate a great deal of citation and discussion by appearing in online repositories, such as SSRN. Indeed, work in International Relations — most often quantitative — gets de facto peer reviewed many times before it appears in a journal. This kind of peer review is arguably less stochastic and, in aggregate, more complete than what a manuscript receives at a journal.

My sense (and that, I believe, of many others) is that academic journals serve a number of purposes that are connected, but not always tightly coupled, to idealized accounts of what they’re good for.

  1. Professional certification. Leading journals are hard to get into. The volume of submissions, as well as the (related) attitudes of referees and editors, requires a piece to “hit the jackpot” in terms of reviewer evaluations. Because referees and editors care about maintaining–and enhancing–the perceived quality of the journal, they work harder to make articles conform to the field or subfield standards of excellence. As we move down and across the journal hierarchy, these forces still operate, but to lesser degrees. Thus, lower-ranked journals, or journals perceived as “easier to get into,” confer less symbolic capital. 
  2. Defining standards of excellence. Another way of saying this is that journals produce, reproduce, and transform genre expectations for the style and content of scholarly work. What appears in leading journals sets standards for what should appear in leading journals; even if scholars don’t necessarily buy those standards, those attempting to publish in such journals will seek to replicate “the formula” in the hopes that it improves their chances of success. The same is true of less prestigious and more specialized journals, but those at the top of the hierarchy serve as examples (whether positive or cautionary) that inflect the genre expectations of their many less famous relatives. 
  3. Vetting work. Regardless of what one thinks of the state of peer review, it does provide a gauntlet that often improves–by some measure or other–the quality of the product. So does the attention of dedicated editors. At the very least, we believe this to be the case, which is all that matters for the role of journals in vetting scholarly pieces.
  4. Publicizing work. Scholars read journals–or at least tables of contents–that “matter” (i.e., have currency) in their subfield and in the broader field. So getting an article into a journal increases–subject to the breadth and depth of that journal’s reach–the chances that it will be read by a targeted audience. 
  5. Constituting a scholarly community. Much of the above comes down to shaping the parameters of, and interactions within, scholarly communities. These “purposes” of journals do so in the basic sense of allocating prestige, generating expectations, and so on. But they also contribute to a scholarly sphere of intellectual exchange–they help to define what we talk about and argue over. 

My claim is as follows: every one of these purposes is better met by embedding scholarly journals in Web 2.0 architectures and technologies, whether open-access or not, peer-reviewed or not. The particular advantage of these hybrids lies in vetting, publicizing, and constituting a scholarly community.

Digital environments promote post-publication peer review both by allowing comments on articles and by facilitating the publication of traditional “response” pieces. There’s no reason to believe that they undermine traditional vetting mechanisms, as they handle core articles the same way that non-embedded academic journals do.

Traditional journals, on the other hand, do a poor job of publicizing work, particularly older articles that disappear into the ether (or the bowels of the library). That’s why blogs such as The Monkey Cage have occupied such an important position in the landscape. A journal embedded in shifting content — blogs, blog aggregation, web-only features, promotion of timely articles and of articles that speak to recent debates in other journals — keeps people coming back to the site and, in doing so, exposes them to journal content.

The advantages in terms of constituting and maintaining a scholarly community should be obvious. Web 2.0 integration promises to transform “inputs into community” into ongoing intellectual transactions among not only scholars, but also the broader interested community.

As alluded to above, this transformation is already occurring. But I worry about two aspects of its trajectory.

  1. The most “important” general journals in the field are way behind. 
  2. A number of the current experiments are operating in isolation from the online academic IR community, e.g., they produce “blog posts” that read like op-eds intended for the New York Times, and the only evidence of being in conversation with that community is in the form of desultory blogrolls.