Because not everyone reads comment threads, in part because of the way that people engage with The Duck via RSS readers, and because I think the questions involved are really important ones, I’m going to post my reaction to PM’s “Yes, I do envy physicists” as a separate post of its own:

Man, I was right with you until your advance response to commenters. Making “data and its analysis central to the undergraduate experience” — a.k.a. emphasizing undergraduate research, such that one of the primary learning outcomes of a BA in International Relations or Government or Political Science or whatever is the critical intellectual disposition necessary to be both an intelligent producer of knowledge about the social and political world and an intelligent consumer of other knowledge-claims about that world — is spot-on. (And that is part of why one of the first administrative changes I made as Associate Dean in my school was to establish the position of Undergraduate Research Coordinator, whose job is both to coordinate our methodological course offerings and to make sure that upper-division classes feature opportunities to actually use those techniques in research projects as appropriate.) Now, you and I (probably) disagree about the relative prominence of statistical training in the enterprise of undergraduate research, since as you know I am a lot more small-c-catholic about (social-)scientific methodology than, well, most people. But hey, we’re in the same basic place…

…and then you had to go and diss history and theory. This is counterproductive for at least three reasons:

1) one can’t do good research without both theory and methodology, and the point of the exercise is to help people learn how to do good research, not how to use methodological tools in isolation.

2) de-emphasizing history and theory at the undergraduate level basically guarantees that “re-emphasizing” it at the graduate level ain’t never gonna happen. Teched-up statisticians going to graduate programs aren’t likely to willingly seek out unfamiliar ways of thinking about knowledge-production, and let’s be honest, theory — whatever your favorite flavor of theory — isn’t like methodology in general and isn’t like statistical-comparative methodology of the quantitative kind in particular. So you’ll either get a) statisticians launching smash-and-grab raids on history and theory for a justificatory fig-leaf for their operational definitions of variables and for supposedly “objective” data to use in testing their hypotheses (hey, wait a second, that sounds familiar…oh yeah, it’s what “mainstream U.S. PoliSci” does ALL THE FRAKKING TIME ALREADY); or b) existential crises when students discover that everything they learned in undergrad — I am referring to the “hidden curriculum” here, the conclusions that students will draw from the emphasis on statistics and the de-emphasis on history and theory — is wrong or at least seriously incomplete. Then you factor in the professional incentives for publication in “top-tier” US journals, and the inability to meaningfully evaluate non-statistical work if one hasn’t spent some serious time training in how to appreciate that work, and you get…well, you get basically what we have at the moment in US PoliSci, but worse.

3) since we’re social scientists and not statisticians (or discourse analysts, or ethnographers, or surveyors, or…), methodology is a means to an end, and that end is or should always be the explanation of stuff in the social world. A social scientist teaching stats should be teaching about how one uses stats to make sense of the social world; ditto a social scientist teaching whatever methodology or technique one is teaching. Yes, the disciplinary specialists in those tools are not going to be particularly pleased with everything that we do, but that’s okay, since we’re on a different mission. And that mission necessitates history and theory just as much as it necessitates methodology (and, I would argue, a broad and diverse set of methodological literacies). If one tries to play the game where one looks for external validation of one’s methodological chops by people whose discipline specializes in a particular set of tools, then one is probably going to lose, or one is going to be dismissed as derivative. We’re not about to locate the Higgs boson with anything we do in the social sciences, and we’re not likely to contribute to any other discipline (I mean, it happens, but I think the frequency is pretty low). What we are going to do, or at least keep on trying to do, is to enhance our understanding of the social world. More stats training — more methodology training of any sort — at the undergraduate level is not necessarily a means to that end, unless it occurs in conjunction with more history and theory.

None of this is going to help the public understand what we do any better. We don’t make nuclear bombs or cell phones or (un)employment, and the U.S. is kind of a dispositionally anti-intellectual place (it has been since the founding of the country…see Tocqueville, Hofstadter, etc.), so theory isn’t respected as a contribution. Everybody wants results that they can easily see — can you build a better mousetrap? — and the vague sense that physicists have something to do with engineers and economists have something to do with entrepreneurs (who are, I think, the actual figures that get public prestige, because they do practical stuff) shores up their respective social value. But us, what we have a vague connection to are POLITICIANS, and everybody hates them. So that’s an uphill battle we’re probably fated to lose. So my punchline, which I’ve given many times before: our primary job is teaching students, our scholarship makes us better teachers, and the place to point for evidence of our social value is to those who graduate from our colleges and universities and the people they’ve become as a result of dwelling for a time in the happily intellectual and critical environment we contribute to producing on campus.
