Hey, NYT, we need to talk.

I know, democracy dies in darkness (sorry, WashPo put it better) and we need good journalism, but what you publish in the Opinion Section often does not qualify as journalism, like, at all. I am not even talking about the “Intellectual Dark Web” (which is neither intellectual nor dark, but maybe web) or blatant climate denialism; you seriously need a Russia bullshit detector. Because so far, your Russia articles are mostly botched Cyrillic wrapped in a cliché inside an Orientalist talking point.

The latest “scary Putin/racist nonsense/KGB/italicized Russian words” piece graced your pages yesterday, and it has already sent Russia Twitter eye-rolling off our couches. For starters, who knew that there is a Russian word for “lies”? Like, really. If you actually spent time in St. Petersburg or Moscow (those authors never venture further out, for fear of bears and balalaikas), you would know that “vranyo” is hardly a word anyone would use in the context of whatever “active measures” you are talking about. Which, by the way, are not a thing, and neither is the “Gerasimov doctrine”.

So, what’s with the “corruption DNA”, people? Last time I checked, 23andMe doesn’t offer a breakdown of social vices. When the author talked to Volodya back in the 90s, did he also take some of his genetic material? Or test every single Russian out there? Who counts as a Russian? Just the Russian-speakers, or the ones who live in Russia proper? Do you get the corruption DNA if you have a baby with a Western person? So many questions, and, sadly, no bigotry-free responses.

And what’s with the menacing pictures of Putin? At least when I write my posts, I preface them with some presidential wardrobe malfunction action, aka the executive nipples. Yes, Putin is watching you, but so is PRISM, and that one has way more capabilities and potential for abuse than its Russian analogue. So, Volodya (at least it’s not Vlad) was fine when he helped you get rich and had beer with you in the 90s, but not anymore? As Maxim Edwards remarked, it’s unclear why the contingent of “I made a killing in the nineties and then it went to shit” still needs to be heard. Even though there are a couple of valid points in the piece, they are overshadowed by the racism and conceit of a “civilized” Western man braving the borderless wasteland that is Russia while trying to advertise his company that “recovers assets”.

I have to wrap this up before my head explodes from the uncontrollable rage at the stupidity and arrogance of some of your contributors. Do better next time.

Data is not the problem. Data is not the solution.

Data is not good government.
Even when it wears a green eyeshade.

The Wonkblog view of the world presumes that social problems should be met with policy solutions, and that the best way to analyze policies is to have better data. To an extent, I agree. All else being equal, better data does make for better policy.

But that is a trivial conclusion. Politics is not policy. Indeed, data isn’t politics either. And what Wonkblog provides is frequently an inaccurate guide to all three.

The biggest problem with the Wonkblog attitude is its unthinkingly technocratic approach to everything. This occasionally takes the form of Ezra Klein’s complaints that the 2012 presidential race has featured insufficient attention to policy detail. His desire for a more substantive political debate sits uneasily with his recognition of the current thinking in the study of American politics that economic factors and military casualties–“the fundamentals”–determine most of the variance in presidential vote share, leaving little room for deliberation. Indeed, the voters who are left to be persuaded are often incredibly uninformed. This has been aptly summed up by fellow blogger Matt Yglesias in the Yglesias Paradox:

“I care so little about policy that I can’t form consistent partisan preferences, but I’m open to being persuaded by specifics.” — nobody

Moreover, the detail-driven Wonkblog Weltanschauung often leads Klein astray, as in his stunning assertion that “At the heart of the debate over ‘the 47 percent’ is an awful abuse of tax data.” Such a misreading of Romney can only come from a worldview in which the number “47 percent” is more important than the patently conservative American notion that poor people don’t take responsibility for themselves–a view that would exist regardless of whether the “true” figure were 47 percent, 4.7 percent, or 97 percent of the population.

Yet if Wonkblog often gets it wrong on a philosophical level, it is also often wrong in the details–in large part because they, like all journalists, want to tell a good story. This is not a huge problem for narrative journalists–there are good stories out there, even if it takes a real master to craft them well–but it is a giant problem when the “sources” on which your story relies are academics and journal articles, which are often persuasive individually but far less coherent corporately.

As Mike Paarlberg patiently explains, Wonkblog blogger Dylan Matthews has gotten confused several times reporting on a tangled but important question: whether teachers’ membership in unions affects student achievement. This is high-stakes research, and anyone familiar with social science will be unsurprised that the literature on the question is accordingly full of contradictory conclusions, often involving clever but perhaps flimsy uses of instrumental variables or exogenous treatments, and more straightforward but potentially completely backward uses of standard observational data. In other words, there’s an academic debate on the issue, as there typically is on any major issue in social life.

Matthews, following a dismayingly common trend in the blogosphere (and in the print commentariat in the epoch before that), does not report that controversy. Instead, he seizes on a grab-bag of articles and working papers to make his reports. We are here engaged in a pundit’s version of the game “Simon Says.” Here, though, it’s not the omnipotent Simon telling us what to do, but rather Science. The fact that Matthews plainly has not read the articles he describes (in the posts Paarlberg critiques) as carefully as he should have (caveat: I tried to read them, and found them difficult to follow too) indicates that he is playing the much more sinister game “Science says.”

The rules of the game are well known, especially if you’re a policy entrepreneur.

  • Should we “nudge” people to pay higher pensions contributions? Science says: “Yes.”
  • Will free trade make workers in the U.S.A. better off? Science says: “Yes.” 
  • Should we attack/refrain from attacking Iran if it gets/gives up nuclear weapons? Science says: “Yes.” And science also says “No.” 

You begin to see why “Science says” is such an attractive game, but also one that’s a little less fun than “Simon says.”

I want to be perfectly clear here. The “science” in “Science says” is not that actual, tangled, occasionally confusing and usually contradictory thing that we practice as researchers in the social sciences. Quite the opposite: it’s usually the most persuasive (to some audience) or the most attractive (to some audience) argument that our debates have produced. Because the argument has been honed to a fare-thee-well in faculty debates, it has the same highly polished character that rocks put through a tumbler display. But that doesn’t mean that it’s right, and it doesn’t mean that it’s a fair representation. Just because an argument comes packaged with great data and a persuasive, well-written blog post doesn’t mean it’s the one that journalists and pundits should cite.

Of course, quite often, it is. But the faculties necessary to make those judgments are precisely the ones that other academics are trained to exercise (one hopes), and not ones in which journalists have any particular comparative advantage over any other comparably educated profession.

Hence the distinction between semi-curated blogs run by political scientists, which should do much better, and outlets like Wonkblog, which are run by journalists who do a better job than most at incorporating social science work but which are not, in the end, run by people who have the full training necessary to evaluate the work being produced. This is, in a sense, why it’s a good idea for social scientists to write less and read more–and also to think about how the discipline as a whole should communicate its findings and the content of its debates. (Note that the journalists who cover the hard sciences are better, but not perfect, in this regard.) What we do is important, but it’s easy to misinterpret.

Correction: This post originally, and briefly, read “Wonkbook” for “Wonkblog.”

Late Addendum: If you are basing your claims to expertise on interpreting stats for a mass audience, make sure that you can interpret stats.
