Peter Campbell and Michael Desch write in Foreign Affairs that the National Research Council’s rankings of political science departments are systematically biased against international relations scholarship and against policy-relevant scholarship:
The NRC’s methodology biased its rankings against two kinds of scholarship: international relations scholarship, which is often book-oriented; and policy-relevant scholarship, which often appears in non-peer-reviewed journals. That leads to vast undervaluation of many leading scholars and, accordingly, their schools. As an illustration, the late international relations scholar Kenneth Waltz’s hugely influential book Theory of International Politics constitutes 64 percent of his enormous count of nearly 5,000 Social Science Citation Index hits, the most commonly used indicator of scholarly impact. To exclude his books from any ranking of impact is hard to justify. It also discourages ranked programs from promoting authorship of in-depth policy-relevant work.
Given policy schools’ focus on graduate teaching and Ph.D. supervision, the NRC rankings also discounted the achievements of scholars there, or scholars who teach only undergraduates. That means that even excellent international relations scholars at, for example, undergraduate colleges such as Dartmouth, and at master’s programs such as Harvard’s Kennedy School, Johns Hopkins’ SAIS, Georgetown’s Walsh School, and George Washington’s Eliot School, get left out in the cold. In turn, that gives universities little incentive to invest in policy scholarship, policy school teaching, or undergraduate teaching. That is unfortunate because many of the leading scholars of international relations now teach at such schools.
Further, despite giving credit for interdisciplinary faculty, the NRC’s ranking of disciplinary departments ignores interdisciplinary programs, supposedly the academic wave of the future. This discourages universities from nurturing those departments, instead fostering ever more specialized scholarship conducted in isolated intellectual silos. Such siloing only reinforces the strictly disciplinary orientation of scholarship and steers scholars away from work that does not fall neatly within one scholarly approach.
The essay reviews the methodology the NRC used and then compares it to Campbell and Desch’s own ranking system, which corrects for these biases. Their findings:
When scores are updated based on publications in scholarly international relations journals, such as International Security, International Organization, and World Politics, Princeton, Stanford, MIT, Harvard, Ohio State, Chicago, Columbia, and UC San Diego move up the ladder (IR Ex h-index). Schools such as Georgetown, which have a substantial policy focus, also fare better.
When books are added to the equation, the rankings are scrambled once again, with Brown capturing the top spot, joined in the top ten for the first time by Indiana Bloomington, Santa Barbara, UC Berkeley, and Cornell.
One cut at trying to rank programs by the presence of their faculty in non-academic publications involved factoring in Foreign Affairs and Foreign Policy articles. In the first case, new entrants into the top ten include Georgetown, Johns Hopkins, and Virginia. In the second case, institutions such as Michigan State and Pennsylvania move up to join Harvard, Princeton, Georgetown, Stanford, and Columbia.
A slightly different measure of policy relevance was a ranking of IR programs by the number of their faculty who have won the Council on Foreign Relations’ International Affairs Fellowships, which allow them to serve for a year in a government position. Berkeley leads in this category.
Finally, it is possible to rank the relevance of political science departments based upon the number of times their faculty testify before Congress, controlling for the size of the department. Here, UC Santa Barbara comes out on top, followed by Georgetown, Virginia, Maryland, Columbia, George Washington, UC Irvine, MIT, Stanford, and Princeton.
Simply put, when you rank political science departments by disciplinary, subfield, and broader relevance criteria, you get very different results. Given that, we believe that broader criteria of scholarly excellence and relevance ought to be part of how all departments are ranked. We are not advocating junking traditional criteria for academic rankings; rather, we urge that such narrow and disciplinarily focused criteria simply be balanced with some consideration of the unique aspects of international relations and also take account of the broader impact of scholarly work.
I’m not sure that excluding subfield-specific journals means the NRC rankings are “systematically biased” against IR as a field, since they would also be systematically biased against other subfields in the same way. However, the argument about policy relevance and siloing is very interesting, and probably applicable not just to IR but to many other fields in political science. Check out Campbell and Desch’s dataset here.
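For anyone poking at that dataset, here is a minimal sketch in Python of the two kinds of measures the excerpt describes: an h-index computed from per-publication citation counts, and a per-capita ranking that controls for department size. The department names and numbers below are hypothetical, and this is an illustration of the generic metrics, not Campbell and Desch’s actual code.

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h


def per_capita_rank(counts, faculty_sizes):
    """Rank departments by some raw count (e.g. congressional testimonies)
    divided by the number of faculty, highest rate first."""
    rates = {dept: counts[dept] / faculty_sizes[dept] for dept in counts}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical citation counts for one scholar's publications.
    print(h_index([3200, 400, 250, 90, 40, 12, 5, 1]))  # -> 6

    # Hypothetical testimony counts and faculty sizes for two departments.
    print(per_capita_rank({"A": 30, "B": 18}, {"A": 60, "B": 20}))
    # -> [('B', 0.9), ('A', 0.5)]
```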
In the UK, we account for this in the REF with Impact, which makes up 20 percent of the ranking. But I reject the idea that you have to choose between policy and academic scholarship. Many of us, especially on the Duck, do both.
Certain other fields or subfields are similarly disadvantaged by ignoring books. Political theory is heavily book-oriented, for instance, and so is American political development (APD), which communicates its most important ideas in book form.
I agree that books should be counted. They are an essential part of political science. I also agree that ignoring interdisciplinary work is highly problematic, as it prevents our discipline from contributing to the solution of highly complex problems. However, including non-peer-reviewed journals in the count would be wrong. If policy research is conducted rigorously, there are many peer-reviewed outlets for it. Foreign Affairs can then publish an edited summary for non-scientific audiences.
I am not impressed: https://saideman.blogspot.ca/2013/09/rank-confusion.html