International Organization, which is almost certainly the most influential journal in the field, describes its mission as “publish[ing] the best and most innovative scholarly manuscripts on international relations.” According to the website of the European Journal of International Relations, “Theoretically aware empirical analysis and conceptual innovation forms the core of the journal’s dissemination of International Relations scholarship…” The American Political Science Review informs potential authors that submissions “should be original, innovative, and well-crafted.”
“Top journals” in the study of international relations and political science routinely invoke inadequate “novelty” as a reason for rejecting manuscripts. I assume the same is true in most academic disciplines. Questions about “novelty” often show up in complaints about the peer-review process. “Reviewers,” as Mark Humphries writes, are “gatekeepers, asked to judge the novelty and importance of the work, when novelty is arbitrary, and importance is often clear only in retrospect.”
What is “novelty”? We need to distinguish between two different ways that referees and editors use the term.
First, novelty sometimes means, more or less, original research. That is, if someone writes that “this manuscript isn’t sufficiently novel,” they mean that it involves plagiarism, includes significant text recycling (frequently, and misleadingly, referred to as “self-plagiarism”), or is a “minimal publishable unit” that slices the research salami too thinly.
If you’ve spent time looking at the curricula vitae of scholars who publish a lot of peer-reviewed articles, then you know that a fair number include multiple publications that might charitably be described as “variations on a theme.” Whatever you think of this phenomenon, it’s a straightforward, rational approach to participating in the publications arms race.
It’s not even necessarily a bad thing. Even in the digital age, different journals reach distinct audiences; I’ve read any number of manuscripts (and published articles!) that would be greatly improved by being broken up into two or three different pieces.
Still, efforts to extract as many publications as possible from a single chunk of research can, and do, go too far.
Second, novelty can refer to innovativeness. When journals talk about novelty, they often mean that they want to publish articles that challenge conventional wisdom, “shake things up,” or otherwise give us something that most readers won’t have seen before. This sort of novelty entails introducing, for example, a new (for certain values of “new”) conceptual apparatus, theory, findings, or dataset.
Not all journals emphasize innovation in their guidelines. But that’s no guarantee that editors and referees will not, in fact, reject an article on the grounds that it “fails to make a novel contribution to the literature.” Graduate students figure out pretty quickly that a paper should be very explicit about how it “contributes” to the discipline. Hence the fact that almost every introduction in the field now includes a variation on “this article makes three contributions…”
(Yes, contributions, like Ramans, almost always come in threes.)
Because those contributions might not be enough to sway referees and editors, a non-trivial number of authors have taken to peppering their manuscripts with the word “novel” (and its synonyms). Authors make sure to inform readers that their articles present “novel theories,” “novel datasets,” and “novel findings.”
I get why scholars engage in this practice. I sympathize with the impulse. Surely, there’s no harm in hitting referees over the head until they fully understand that the manuscript might as well be titled “Novel MacNoveltyFace”? It seems like a sensible form of preemptive self-defense. Indeed, I suspect that I’ve done it myself.
Heck, I also cringe at the misuse of “extant” and “utilize” — which academics employ since… “existing” and “use” are the language of the unwashed masses, I guess? Yet I too once happily wrote about “extant” literature; I wasn’t talking about the myths and legends of Uruk.
Nonetheless, the proliferation of “novel” is a scourge.
I’ve seen things you people wouldn’t believe. .csv files created from off-the-shelf data described as “novel datasets.” Theories debated in the streets of Athens called “new.” All those “contributions” will be lost in time, like old articles in EBSCOhost…
Where was I? Right.
Some of my friends, because they have no souls, consider this a positive development. They argue that it streamlines the process of reading and reviewing articles.
I suppose they have a point.
Do you know what I think makes reading and reviewing articles less work?
- Not having to look at “three contributions” that consist of two vague genuflections toward bodies of research and one restatement of the thesis of the article;
- When authors actually explain what is different, interesting, and important about their dataset; and
- Papers that don’t claim to be advancing a brand spanking new, never-before-seen theory because they deal with ethnic violence, while those other articles that used the exact same theoretical logic dealt with communal violence.
As frustrating as I find slavish devotion to article templates, that’s not the real problem.
Scholarly knowledge is a social product, one that accumulates through ongoing dialogue and debate. The fickle demon that is academic success, though, focuses its incentives, punishments, and rewards on individuals. It should not come as a surprise that these two aspects of academia tend to work against one another.
The cult of novelty is a good example. Most of the downsides need no elaboration, but I think at least one deserves more attention: it puts pressure on scholars to misrepresent their work. This isn’t only a matter of creating incentives for academics to “inflate their claims in order to emphasize their novelty.” It also encourages them to misrepresent other scholarship by downplaying, effacing, or simply ignoring how much it overlaps their own.
I should be absolutely clear here: I don’t think many academics intentionally misrepresent “the literature” in order to make their work appear more innovative than it actually is. Even in an age of “Google alerts” and regular internet searches, it can be challenging to track scholarship in your own field. There’s just so much stuff, which also explains why people lose track of material they have read. It me.
As I’ve said for years, missing citations are never, in and of themselves, reasons to reject a manuscript. I cannot think of anything that’s easier to do during revisions than add references.
When I was editing International Studies Quarterly, I saw multiple cases in which referees claimed that a manuscript recapitulates existing arguments but didn’t make a convincing case. Sometimes they pointed toward a huge literature without providing enough information to check (“if I can’t find any indication that it’s there after multiple Google Scholar searches, I’m going to assume there’s a reason you weren’t more specific”). Other times they mentioned specific books or articles but, for the life of me, I couldn’t figure out what they were talking about. I wouldn’t be surprised if I’ve done at least one of these things at some point in my career — and more than once.
If we expect referees to assess the novelty of a manuscript — and they do, at a minimum, need to help editors screen out unoriginal research — then we should expect them to generate false negatives. But even accounting for overzealous gatekeeping, we still will find plenty of examples of “novelty inflation” that look, at best, like the result of motivated reasoning.
What I find more troubling is that I’m starting to see similar (and, frankly, silly) rhetorical strategies in more and more manuscripts that I review.
The worst offender isn’t quite as ridiculous as “my theory is completely novel because it’s about ethnic violence and the existing ones are about communal violence.” But it’s pretty close. I’ve sometimes referred to the strategy as the “overly literal” or “semantic” approach to product differentiation: authors dismiss cognate theories and research because they don’t discuss or describe the same precise phenomena as the manuscript.
This could mean that the manuscript reinvents the theoretical wheel. It could mean that the manuscript leaves relevant alternative explanations on the table. It could mean both. Regardless, we get a weaker manuscript.
In sum, annoying academic prose is the most benign consequence of a foolish insistence on novelty. There are likely a variety of more pernicious effects. One of the most concerning is how it encourages scholars to misrepresent (often tangential) aspects of their work, and thus helps normalize a culture of low-level academic dishonesty.
Putting too much emphasis on novel, innovative contributions to the field probably doesn’t even achieve its stated aim. There are good reasons to believe that it leads to an emphasis on superficial forms of novelty rather than the publication of genuinely innovative work. To get into a top journal, a manuscript needs to demonstrate a “novel contribution,” but one defined and signaled in ways — a new dataset, a remix of existing theories — that don’t actually do much to challenge referees’ prior views.
Is there an alternative? I don’t know.
The intensity of the publishing arms race in the field means that “top” journals receive far more manuscripts — ones that would be publishable after a revise-and-resubmit process — than they can possibly move forward with. It’s no wonder editors and referees rely on “over-and-above” criteria to winnow the field. It’s an established, normatively acceptable heuristic for keeping control over a journal’s backlog. Plus, referees invoke it spontaneously.
These discussions often end with a call for journals to publish everything that’s “above the bar” (whatever that means), regardless of how much it adds to the “stock of knowledge.” I agree that, when it comes to contributions, we’re only wise after the fact.
I also believe it’s no accident that this suggestion pretty much only comes from scholars who primarily do statistical work. The proposal rests on the assumption that if only we put in enough effort, we can develop a clear, consensus threshold for acceptance. Some “quantitative” scholars think that’s possible. As far as I can tell, pretty much no one else does.
Saying “what, me worry?” in the face of an oversupply of manuscripts is certainly a choice; if we can’t find non-arbitrary ways to keep the acceptance rate down, why bother to try? Personally, I’d prefer that journal backlogs don’t stretch decades into the future.
Thoughts?
Editor’s note: this may or may not be the first in an occasional series of posts in which I complain about trends in academic publishing and manuscripts that I review.
Multiple good points, but prompted to comment only by the momentary joy experienced by the unexpected and sub silentio Blade Runner quote. Awesome
Aw, shucks.
Maybe journals should be stricter in demanding that novelty is really there. There is no use in having tons of meaningless articles that do not advance the discipline, and when I see what gets published even in the best journals, I am often puzzled. “Why should we care about this stuff?” is probably the most common reaction I have to articles in IR.
This is of course related to the middle-range (or, worse, the simplistic hypothesis-testing) turn in IR. The discipline has essentially stopped asking the big questions, yet journals pretend they want novelty. IR has stopped being concerned with answering the questions that are the very reason it exists in the first place; instead it is consumed with being “a normal science” producing tons of “normal research,” which has highly limited value in the absence of a framework that would provide an overarching meaning. As Kuhn wrote: “One somehow hesitates to call the literature that results scientific.”
Guess what: you cannot be a normal science before you get the basics right.
We need to frame our work in the language of models! My two cents: novelty can only be something formally theoretical (not necessarily mathematical), and that happens rarely. Most other work is neither novel nor even a theory; it is working within a novel theory, either to sharpen it or to deal with what I would argue is the only other type of novelty, namely some formal empirical method that helps us with the empirical implications of theory! There is no such thing as original data!