There is much gnashing of teeth about citations of late.  This tweet inspired the ensuing spew below:

But this series of posts at the Monkey Cage last week on gender bias in citations also raises questions about using citations as a metric of success (the link points to the final piece in the series, which links to the rest of the posts).  If the numbers are problematic, what should we do?

Let’s move back a step.  Citation counts are not everything.  Yes, folks do think about them when hiring, tenuring, and promoting people, but all of these processes involve other stuff besides saying person x has more citations than person y.  Using multiple sources of citation counts also reduces bias a bit.  The old Social Science Citation Index ignored books entirely, whereas scholar.google.com (and the resulting handy tool, Publish or Perish) does count books and other forms of publication to some degree.  It also includes unpublished papers, which may be problematic as well, but which do help junior faculty who cannot afford to wait while their stuff gets cited in things that are eventually published.  Of course, that raises questions about what should count, but raising that question is better than never asking it, which is what folks did in the old days of the SSCI.

The next challenge is that the numbers can be gamed a bit.  One can write articles that are citation bait: pieces aimed at producing a counter-reaction rather than advancing our understanding.  Some folks would call these people trolls.  It is also hard to compare the numbers, since some debates get played up so much, and develop such robust citation circles, that citations accumulate far faster there than in other areas.

So, should we drop the focus on citation counts and return to the old ways, as some might suggest?  I once again find myself invoking Churchill: citation counts may be the worst form of academic comparison except for all the others.  Before citation counts, what did folks in our professionalizing profession rely on?  Well, some relied on old boys’ networks.  Rather than focusing on counts of articles and counts of citations, one could simply rely on what friends tell one about who is worthy in the field.

Yes, citation counts are problematic, but because they are numbers, we can examine, weight, modify, correct, re-code, and include them in ranking algorithms, which means we can actually do some serious thinking and evaluating with them.  Can we do the same with the old strategy of relying on networking behind closed doors?  Also, there is clearly gender bias in citations, as demonstrated in various places, but I am pretty sure that old boys’ networks have that problem even more so.

To be clear, we can do better than citation counts, but we have certainly done far worse.
