Data is not good government. Even when it wears a green eyeshade.
The Wonkblog view of the world presumes that social problems should be met with policy solutions, and that the best way to analyze policies is to have better data. To an extent, I agree. All else being equal, better data does make for better policy.
But that is a trivial conclusion. Politics is not policy. Indeed, data isn’t politics either. And what Wonkblog provides is frequently an inaccurate guide to all three.
The biggest problem with the Wonkblog attitude is its unthinkingly technocratic approach to everything. This occasionally takes the form of Ezra Klein’s complaints that the 2012 presidential race has featured insufficient attention to policy detail. His desire for a more substantive political debate sits uneasily with his recognition of the current thinking in American politics that economic factors and military casualties–“the fundamentals”–determine most of the variance in presidential vote share, leaving little room for deliberation. Indeed, the voters who are left to be persuaded are often incredibly uninformed. This has been aptly summed up by fellow blogger Matt Yglesias in the Yglesias Paradox:
“I care so little about policy that I can’t form consistent partisan preferences, but I’m open to being persuaded by specifics.” — nobody
Moreover, the detail-driven Wonkblog Weltanschauung often leads Klein astray, as in his stunning assertion that “At the heart of the debate over ‘the 47 percent’ is an awful abuse of tax data.” Such a misreading of Romney can only come from a worldview in which the number “the 47 percent” is more important than the patently conservative American notion that poor people don’t take responsibility for themselves–a view that would exist regardless of whether the “true” figure were 47 percent, 4.7 percent, or 97 percent of the population.
Yet if Wonkblog often gets it wrong on a philosophical level, it is also often wrong in the details–in large part because they, like all journalists, want to tell a good story. This is not a huge problem for narrative journalists–there are good stories out there, even if it takes a real master to craft them well–but it is a giant problem when the “sources” on which your story relies are academics and journal articles, which are often persuasive individually but far less coherent corporately.
As Mike Paarlberg patiently explains, Wonkblog blogger Dylan Matthews has gotten confused several times reporting on a tangled but important question: whether teachers’ membership in unions affects student achievement. This is high-stakes research, and anyone familiar with social science will be unsurprised that the literature on the question is accordingly full of contradictory conclusions, often involving clever but perhaps flimsy uses of instrumental variables or exogenous treatments and more straightforward but potentially completely backward use of standard observational data. In other words, there’s an academic debate on the issue, as there typically is on any major issue in social life.
Matthews, following a dismayingly common trend in the blogosphere (and in the print commentariat in the epoch before that), does not report that controversy. Instead, he seizes on a grab-bag of articles and working papers to make his reports. We are here engaged in a pundit’s version of the game “Simon says.” Here, though, it’s not the omnipotent Simon telling us what to do, but rather Science. The fact that Matthews plainly has not read the articles he describes in the posts Paarlberg critiques as carefully as he should (caveat: I tried to read them, and found them difficult to follow too) indicates that he is playing the much more sinister game “Science says.”
The rules of the game are well known, especially if you’re a policy entrepreneur.
- Should we “nudge” people to pay higher pensions contributions? Science says: “Yes.”
- Will free trade make workers in the U.S.A. better off? Science says: “Yes.”
- Should we attack/refrain from attacking Iran if it gets/gives up nuclear weapons? Science says: “Yes.” And science also says “No.”
You begin to see why “Science says” is such an attractive game, but also one that’s a little less fun than “Simon says.”
I want to be perfectly clear here. The “science” in “Science says” is not that actual, tangled, occasionally confusing and usually contradictory thing that we practice as researchers in the social sciences. Quite the opposite: It’s usually the most persuasive (to some audience) or the most attractive (to some audience) argument that our debates have produced. Because the argument has been honed to a fare-thee-well in faculty debates, it has the same highly polished character that rocks put through a tumbler also display. But that doesn’t mean that it’s right, and it doesn’t mean that it’s a fair representation. Just because an argument comes packaged with great data and a persuasive, well-written blog post doesn’t mean it’s the one that journalists and pundits should cite.
Of course, quite often, it is. But the faculties necessary to make those judgments are precisely the ones that other academics are trained to exercise (one hopes), and not ones in which journalists have any particular comparative advantage over any other comparably educated profession.
Hence, the distinction between semi-curated blogs run by political scientists, which should do much better, and outlets like Wonkblog, which are run by journalists who do a better job at incorporating social science work but which are not, in the end, run by people who have the full training necessary to evaluate the work being produced. This is, in a sense, why it’s a good idea for social scientists to write less and read more–and also to think about how the discipline as a whole should communicate its findings and the content of its debates. (Note that the journalists who cover the hard sciences are better, but not perfect, in this regard.) What we do is important, but it’s easy to misinterpret.
Correction: This post originally, and briefly, read “Wonkbook” for “Wonkblog.”
Late Addendum: If you are basing your claims to expertise on interpreting stats for a mass audience, make sure that you can interpret stats.
I don’t read Wonkblog often enough to weigh in on the overall pt here, though I think you’re probably right on the technocratic approach — it is called Wonkblog after all.
I would take some issue, however, with the sentence where you say that Klein’s desire for a more specific debate on policy sits uneasily w his recognition that the “fundamentals” (state of the economy, casualties) drive election results. This is arguably apples and oranges. One can want a more specific policy debate b/c it might have educational value, informing voters and elevating the tone of the campaign, even if in the end it’s not going to change many or even any votes. The problem w relentlessly emphasizing the pol sci research on the “fundamentals,” as the Monkey Cage does (and Klein reads TMC), is that it can leave an impression of “why are we bothering to have an election or debates? Let’s just measure the fundamentals, vote, and have done w it.” It could leave this impression, but it doesn’t logically follow b/c, as Sides himself would presumably concede, there is more to an election than who wins — or, at least, there should be. In some countries campaigns are probably still airing grounds both for policy debates of some specificity and for more basic philosophical debates of some sophistication, and while that hasn’t been true in the US for a long time (if ever), that doesn’t mean it’s not a goal worth striving for, however hopeless it might seem.
Note: I wrote the above when I thought the post ended midway through (screen display issues). I’ve now read the whole thing. But I don’t think I need to revise the above comment.