Perhaps Our Incentives Are Not as Perverse as Believed: Are Citation Counts the Devil?

17 March 2017, 1104 EDT

I have regularly seen stuff online or in academic publications complaining about professionalization and what it has meant for Political Science.  The basic idea is that things were great before people became focused on stuff like citation counts, which has led to all kinds of perverse incentives.  The main complaint, it seems, is that scholars will try to game citations and this will force them into bad habits and away from good work, like thinking big thoughts (grand theory).

To be sure, as John Regehr illustrated so well, academics are smart at gaming systems:

While the first one listed above led me to an argument with Patrick Jackson, a former Duck-ster, on Twitter last week, and the third led to way too many grant applications at my first tenure-track job when the university became obsessed with rankings based on grants, today I am focused on the second: maxing citations.

Specifically, the argument is that there is more quantitative work than there used to be in part because such work is more cited (and, implicitly or explicitly, easier to do quickly and in volume, but those are myths to bust on another day).*  How do we know this?  Well, we don't, or we didn't.  It has been a folk belief, spread by those who are nostalgic for the good old days.**

As it turns out, using the data produced by the kindly folks at the College of William and Mary's TRIP (Teaching, Research, and International Policy) Project, I was able to run some modest tests (yes, testing hypotheses, oooooo!) on their datasets, which code articles and include indicators for citation counts.  I did find a very slight but quite significant positive correlation between quantitative work and higher citation counts: a correlation of .06, p < .001.  I wonder whether that is enough to cause much of the discipline to learn statistical techniques, to learn Stata and now R, and to learn LaTeX so that one can be super-hip.  Alas, for those who think citation counts direct people towards quantitative work rather than grand theory or theory development, the correlation between citation counts and work that pushes a paradigm forward (articles that are both pure theory and focused on a paradigm) is also significant: .08, p < .0001.  Of course, the correlations are small enough that it is unlikely anyone will go "aha! I need to do x kind of work because it will lead to mountains of citations."
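For the curious, the kind of check described above can be sketched in a few lines of Python. This is only an illustration, not the actual analysis: the file and column names (trip_articles.csv, quantitative, paradigm_theory, citation_count) are hypothetical placeholders rather than the TRIP codebook's labels, and a simple point-biserial correlation stands in for whatever specification the eventual article uses.

```python
# Minimal sketch of the correlation check described above, assuming a
# TRIP-style dataset with one row per coded article. File and column
# names are hypothetical placeholders, not the actual TRIP codebook.
import pandas as pd
from scipy.stats import pointbiserialr

articles = pd.read_csv("trip_articles.csv")  # hypothetical export of coded articles

# Point-biserial correlation: 0/1 "is quantitative" flag vs. citation counts
r_quant, p_quant = pointbiserialr(articles["quantitative"], articles["citation_count"])

# Same test for articles coded as pure theory within a paradigm
r_para, p_para = pointbiserialr(articles["paradigm_theory"], articles["citation_count"])

print(f"quantitative vs. citations:    r = {r_quant:.2f}, p = {p_quant:.4f}")
print(f"paradigm theory vs. citations: r = {r_para:.2f}, p = {p_para:.4f}")
```

With a large number of coded articles, a correlation of .06 can easily clear p < .001 while remaining substantively tiny, which is exactly the point: statistically detectable, but hardly the stuff of career strategy.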

Perhaps perceptions of how this stuff works have meant that people foolishly chose to do quantitative work based on a belief about its professional benefits (more cites).  So, let this post (and the eventual article, if it makes it through R&R) free scholars from the shackles tying them to their datasets and stats packages!  Or, more realistically, let's just keep doing the work that folks prefer to do, as they seek to answer the questions they ask in the ways they want to ask them.

Oh, and about those pesky citation counts?  Yes, they are problematic.  Citation cartels can develop, folks can try to figure out how to game the system, and all the rest, but any system can produce perverse incentives (see table above).  Thus, to paraphrase Winston Churchill: citation counts are the worst way to measure academic performance, except for all of the others.

*  Those who argue that working with datasets is easy have probably never worked with datasets.  While my most cited piece is a quantitative one, my grant projects that have been the most, um, fruitless or unproductive have been ones that attempted to gather data.  It can be really, really hard, and it can lead to findings that are not very interesting or even publishable.  That there is more quant work now speaks perhaps to the pressures of the job market but also to improved technology: powerful computers, machine coding of data, online sources of data, and the rest that facilitate such work.  It probably is not due to the evils of defense spending (sorry, Ido).
**  As Billy Joel once said, the good old days weren't always so good, and tomorrow ain't as bad as it seems.  How did things work in the old days?  Ido Oren approvingly recounts how Waltz and Keohane dominated the allocation of jobs.  Mearsheimer and Walt dropped their dismay at the demise of the Old Boys Network from the published version of their piece.  But the nostalgia in these pieces for the days before professionalization always makes me a bit nauseous, as the old system was far from swell.