Researchers associated with the Human Security Report Project have a new article in the Journal of Conflict Resolution contradicting a recent critique of the corpse-counting techniques prevalent in the battle-deaths community. The original critique, authored by Obermeyer, Murray and Gakidou (humorously referred to as “OMG” by the authors of this new rebuttal), compared war death reports from the World Health Organization’s sibling survey data with battle-death estimates for the same countries from the International Peace Research Institute in Oslo (PRIO). It concluded that battle-death estimating methods (which draw on incident reports by third-party observers) significantly undercount war deaths because so many go unreported. Surveys, OMG argued, constitute a better measure of the death toll of war; and in the wider survey data, they concluded, there is less support for the widely reported global decline in war deaths.

Michael Spagat, Andrew Mack and their collaborators point out a few errors and inconsistencies in the comparison drawn by “OMG.” The most damning of these is that the data are non-comparable: the PRIO dataset measures “battle-deaths” (soldiers killed, plus civilians killed in the crossfire, in two-sided wars where a government is one of the parties), whereas the WHO dataset measures all “war deaths” as reported by conflict-affected populations. So the most OMG can say is that battle-death estimating methods undercount war deaths because they aren’t counting all war deaths. Maybe they have a point. Actually, I think both sets of indicators – as well as the labels we assign to them – are subject to critique, and I’ve said so elsewhere.

However, regardless of how we define which corpses to count and what to call them, what ought to be at issue here is how best to arrive at valid estimates. Suppose OMG’s original findings were true, and suppose both datasets were actually trying to measure the same thing. Would this mean surveys are a more accurate measure than incident reporting, or simply that both measures are inaccurate in different directions? I can imagine that surveys would result in significant over-reporting, just as I find it plausible that incident-report-based estimation methods may miss some deaths. I am no number-cruncher, but if I were constructing a casualty dataset for specific wars, I expect I would want to take the average of the two estimates. So this debate over which methodology is more accurate strikes me as slightly misplaced.
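The “split the difference” intuition above can be sketched in a few lines. This is a toy illustration only: the function name and the figures below are invented for the example and are not drawn from the PRIO or WHO datasets.

```python
def combine_estimates(incident_based: float, survey_based: float) -> float:
    """Naive combination of two casualty estimates.

    Assumes the incident-report count is biased downward (missed
    events) and the survey count is biased upward (over-reporting),
    and simply takes the midpoint of the two.
    """
    return (incident_based + survey_based) / 2

# Hypothetical figures for a single conflict-year:
incident_report_count = 4_000    # e.g. an incident-report-based battle-death count
survey_based_count = 10_000      # e.g. a survey-based war-death estimate

midpoint = combine_estimates(incident_report_count, survey_based_count)
print(midpoint)  # 7000.0
```

A real analyst would of course weight the two sources by their estimated reliability rather than averaging them blindly, but the sketch captures the point that two oppositely biased measures may bracket the truth.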
