Tag: social science (page 1 of 2)

Friday Nerd Blogging: Winter is Coming to #ISA2015

More information about the genesis of this panel here. Paper abstracts here.


The Interdisciplinarity Shibboleth: Christakis Edition

Andrew Gelman provides a nice rejoinder to Nicholas Christakis’ New York Times op-ed, “Let’s Shake up the Social Sciences.” Fabio Rojas scores the exchange for Christakis, but his commentators provide convincing rebuttals to Rojas. Once again, I suspect reactions to the column are driven by homophily rather than network effects. But all this aside, Christakis makes an interesting claim about the evidence for stagnation:

In contrast, the social sciences have stagnated. They offer essentially the same set of academic departments and disciplines that they have for nearly 100 years: sociology, economics, anthropology, psychology and political science. This is not only boring but also counterproductive, constraining engagement with the scientific cutting edge and stifling the creation of new and useful knowledge. Such inertia reflects an unnecessary insecurity and conservatism, and helps explain why the social sciences don’t enjoy the same prestige as the natural sciences.

Instead, we should provide more funding for people like Christakis to create departments that reflect the “cutting edge” of interdisciplinarity:

It is time to create new social science departments that reflect the breadth and complexity of the problems we face as well as the novelty of 21st-century science. These would include departments of biosocial science, network science, neuroeconomics, behavioral genetics and computational social science. Eventually, these departments would themselves be dismantled or transmuted as science continues to advance.

Some recent examples offer a glimpse of the potential. At Yale, the Jackson Institute for Global Affairs applies diverse social sciences to the study of international issues and offers a new major. At Harvard, the sub-discipline of physical anthropology, which increasingly relies on modern genetics, was hived off the anthropology department to make the department of human evolutionary biology. Still, such efforts are generally more like herds splitting up than like new species emerging. We have not yet changed the basic DNA of the social sciences. Failure to do so might even result in having the natural sciences co-opt topics rightly and beneficially in the purview of the social sciences.



It’s Not About #PoliSci, It’s About the #NSF

One point that I’d like to see made a little bit more clearly is that political scientists should try to reframe this. I doubt that we have much sympathy among members of other disciplines; that quote about “first they came for the X” is troubling precisely because, well, nobody stands up for the Xs as Xs. Besides, academics don’t have much sympathy for anyone outside of their discipline: would political scientists rally behind a struggling Anthropology? And the jerks at Freakonomics encouraged their readers to support icing both poli sci and sociology, so I doubt we can count on much deep help from the economists.

However.

If there’s one thing we can do, it’s to point out that there is a risk that targeting poli sci could lead to an actual domino theory. Not so much in the Coburn-is-coming-for-you-next sense—my guess is that Dr. Coburn (R., Latveria) is not, actually, all that incensed by NSF funding for economists—but in the sense that Congress shouldn’t dictate the inner workings of the NSF on anything. If it’s not Coburn targeting economists, maybe it’s Rand Paul requiring the NSF to only sponsor non-Keynesian economics research. Or Jeff Flake banning research into evolution—or a requirement that all geological research consider the null hypothesis that the earth is 4,000 years old.



What Exactly is the Social Science Citation Index Anyway?

[Image: Thomson-Reuters Journal Citation Reports banner]

Yeah, I don’t really know either. I always hear the expression ‘SSCI’ thrown around as the gold standard for social science work. Administrators seem to love it, but where it comes from and how it gets compiled I don’t really understand. Given that we all seem to use this language and worry about impact factor all the time, I thought I would simply post the list of journals for IR ranked by impact factor (after the break).

I don’t think I ever actually saw this list before all laid out completely. In grad school, I just had a vague idea that I was supposed to send my stuff to the same journals whose articles I was reading in class. But given that I haven’t found this list posted on the internet anywhere, here it is. I don’t know if that means it is gated or something, or if my school has a subscription, or whatever. Anyway, I thought posting the whole IR list would be helpful for the Duck readership.

But I have a few questions. First, why does Thomson-Reuters create this? Why don’t we do it? Does anyone actually know what they do that qualifies them for this? And don’t say ‘consulting’ or ‘knowledge services’ or that sort of MBA-speak. The picture above includes some modernist, high-tech skyscraper, presumably to suggest that lots of brilliant, hi-tech theorists are in there crunching away at big numbers (but the flower tells you they have a soft side too – ahh), but I don’t buy it. Are these guys former academics who know what we read? Who are they? Does anyone know? The T-R website tells you nothing beyond buzzwords like ‘the knowledge effect’ and ‘synergy.’ I am genuinely curious how T-R got this gig and why we listen to them. Why don’t we make our own list?

Anyway, I don’t really know, so I just thought I’d throw it out there. Check the IR rankings below.

More questions:

I am not sure if the SSCI and the Journal Citation Reports from T-R are different or not or what. Click here to see the SSCI list; and here is the JCR link, which is probably gated, but ask your administration; they probably have access. There are 3038 journals in the whole SSCI list (!), 107 listed under political science, and 82 under IR. There is some overlap between the last two, but the PS list does not completely subsume the IR list, as I think most of us would think it should. For example, IS is listed only under IR, not political science, but ISQ is listed under both, even though I think most people would say IS is a better journal than ISQ. Also, there is no identifiable list for the other 3 subfields of political science. I find that very unhelpful. More generally, I would like to know how T-R chooses which journals are on the SSCI and which not. It doesn’t take much effort to see that they’re almost all published in English…

Next, I thought the SSCI was only peer-reviewed, but Foreign Affairs and the Washington Quarterly (which I understand to be solicited, not actually peer-reviewed – correct me if I am wrong) are listed on the IR list, and even Commentary and the Nation magazine are on the PS list. Wow – your neocon ideological ravings can actually count as scholarship. Obviously FA should be ranked for impact factor; it’s hugely influential. But does it belong on the SSCI? Note also that ISR is listed on the IR roster, as is its old incarnation, the Mershon ISR. Hasn’t that been gone now for more than a decade? Also, when you access the impact factors (below), T-R provides an IR list with its ‘Journal Citation Reports’ that has only 78 journals listed for IR, not 82. So the SSCI for IR (82) does not quite equal the JCR for IR (78). Is that just a clerical error? If so, does that mean the super-geniuses in the futuristic skyscraper are spending too much time looking out the windows at the flowers? I guess if you double-count M/ISR, you get 79, which is pretty close to 82, but given how definitive this list is supposed to be, it seems like there are problems and confusions.

2010 is the most recent year T-R provides a ranking, so I used that, plus the rolling 5-year impact factor. The ranking on the left follows the 5-year impact factor, not the 2010 one.

A few things leap out to me:

1. How did International Studies Perspectives rocket up so high in less than 15 years, higher than EJIR, RIPE, and Foreign Affairs? Wow. I guess I should read it more.

2. What is Marine Policy (no. 11) and how did it get so very high also?

3. Security Studies at 27 doesn’t sound right to me. We read that all the time in grad school.

4. A lot of the newest ones, at the bottom without a 5-year ranking, come from Asia. That isn’t surprising, as Asian countries are throwing more and more money at universities. That’s probably healthy in terms of field-range, to move beyond just Western-published ones.

5. Why haven’t I ever even heard of something like half of these journals? I guess we really are a hermeneutic circle – reading just the same journals again and again – APSR, IO, IS, ISQ, EJIR. That’s pretty scholastic when this IR SSCI list shows a rather interesting diversity I never have time to read. A shame actually…

Rank                 Title                          2010 Impact Factor      5-Year Impact Factor

1 INT ORGAN 3.551 5.059
2 INT SECURITY 3.444 4.214
3 WORLD POLIT 2.889 3.903
4 J CONFLICT RESOLUT 1.883 3.165
5 INT STUD QUART 1.523 2.427
6 INT STUD PERSPECT 0.719 2.344
7 EUR J INT RELAT 1.426 2.337
8 FOREIGN AFF 2.557 2.263
9 COMMON MKT LAW REV 2.194 2.071
10 J PEACE RES 1.476 2.036
11 MAR POLICY 2.053 1.961
12 INT J TRANSIT JUST 1.756 1.923
13 INT RELAT 0.473 1.743
14 JCMS-J COMMON MARK S 1.274 1.643
15 INT STUD REV 0.803 1.621
16 REV INT POLIT ECON 0.861 1.519
17 SECUR DIALOGUE 1.6 1.51
18 INT AFF 1.198 1.496
19 CONFLICT MANAG PEACE 0.682 1.423
19 EUR J INT LAW 1.5 1.423
21 WORLD ECON 0.878 1.382
22 STUD COMP INT DEV 0.605 1.352
23 BIOSECUR BIOTERROR 1.26 1.265
24 REV WORLD ECON 0.966 1.201
25 REV INT STUD 0.98 1.177
26 MILLENNIUM-J INT ST 0.727 1.084
27 SECUR STUD 0.766 1.065
28 FOREIGN POLICY ANAL 0.7 1.032
29 TERROR POLIT VIOLENC 0.814 0.946
30 AM J INT LAW 0.865 0.858
31 GLOBAL GOV 0.8 0.848
32 PAC REV 0.683 0.791
33 ALTERNATIVES 0.357 0.776
34 LAT AM POLIT SOC 0.34 0.731
35 STANFORD J INT LAW 0.6 0.727
36 WASH QUART 0.65 0.721
37 CORNELL INT LAW J 0.541 0.693
38 COLUMBIA J TRANS LAW 0.741 0.671
39 J JPN INT ECON 0.444 0.662
40 COMMUNIS POST-COMMUN 0.211 0.64
41 B ATOM SCI 1.057 0.632
42 INT INTERACT 0.258 0.622
43 SURVIVAL 0.472 0.615
44 EMERG MARK FINANC TR 0.444 0.558
45 INT J CONFL VIOLENCE 0.586 0.524
46 OCEAN DEV INT LAW 0.282 0.518
47 AUST J INT AFF 0.508 0.517
48 J STRATEGIC STUD 0.344 0.491
49 SPACE POLICY 0.308 0.381
50 MIDDLE EAST POLICY 0.219 0.309
51 ISSUES STUD 0.13 0.284
52 WAR HIST 0.265 0.262
53 KOREAN J DEF ANAL 0.304 0.261
54 CURR HIST 0.139 0.19
55 WORLD POLICY J 0.144 0.164
56 J MARIT LAW COMMER 0.244 0.15
57 INT POLITIK 0.017 0.042
58 INT POLIT-OSLO 0.013 0.024
59 ASIA EUR J 0.237
59 ASIAN J WTO INT HEAL 0.333
59 ASIAN PERSPECT-SEOUL 0.326
59 BRIT J POLIT INT REL 1.025
59 CAMB REV INT AFF 0.18
59 CHIN J INT LAW 0.206
59 COOP CONFL 0.868
59 INT POLITICS 0.564
59 INT RELAT ASIA-PAC 0.676
59 J HUM RIGHTS 0.34
59 J INT RELAT DEV 0.429
59 J WORLD TRADE 0.398
59 KOREA OBS 0.292
59 N KOREAN REV 0.75
59 PAC FOCUS 0.459
59 REV DERECHO COMUNITA 0.098
59 REV INT ORGAN 0.971
59 STUD CONFL TERROR 0.588
59 ULUSLAR ILISKILER 0.224
59 WORLD TRADE REV 1.231

7 things I don’t like @ being an Academic


This genre is growing on the Duck, so here are a few more thoughts before you take the PhD plunge. Enjoy your last summer to read as you choose, without following a peer reviewer or a syllabus. Such lost bliss…

Generally speaking, yes, I like being an academic. I like ideas and reading. I like bloviating at length. The sun is my enemy, and exercise bores me. I would really like to be a good writer/researcher. Including grad school, I’ve been doing this now for 15 years, so clearly I could have switched. I am committed. But there are at least 7 things I didn’t see back in my 20s, when I had romantic ideas that if I got a PhD, I’d be like Aristotle or John Stuart Mill – some great intellectual with real influence on what a Straussnik once described to me as ‘the Conversation,’ which I took in my heady, pre-game-theoretic youth to be this (swoon).

1. It’s lonely.

I didn’t really think about this one at all before going to grad school. In undergraduate and graduate coursework, you are always very busy and meeting lots of people. You live in a dorm or fun, near-campus housing, you have lots of classes, you hit the bars on the weekends, you go to department functions. Girlfriends/boyfriends come and go. So even if you didn’t like 9 of the 10 people you met, you were meeting so many that you eventually carved out a circle and did fun stuff that kinda looked like the 20-something comedies you see on TV. But once you hit the dissertation, you are suddenly thrown back on your own, and you really re-connect, or try to, with your family, because they’re the only ones who’ll put up with your stress. You spend way too much time at home, alone, in a room, staring hopelessly at a computer screen. You don’t really know what you’re doing, and your committee, while filled with good, smart people who are almost certainly your friends, can’t really do this for you, even though you try to push it off on them.

Then, when you get your job, you spend lots of time in your office or your home office, because the publication requirements are intense (or at least, they feel that way, because you still don’t really know what you’re doing). Maybe you do a joint paper, but the collective action problem strikes. Pretty soon, you spend lots of time, alone, with your office door shut. You eat lunch at your desk, and you read at night in your home office after dinner. It’s the only way to keep up (more on that below). Isn’t that a weird sort of existence that seems unhealthy given that ‘man is a social animal’? I remember at a conference once a few years ago, a colleague opened it by saying, ‘we like going to conferences, because we get lonely all day at work by ourselves.’ I’ve always remembered that remark for its sheer honesty. The room erupted in laughter and approval.

Sure, I could meet people if I had cool hobbies like mountain climbing or biking, but how many academics do that? That’s…outdoors, and far too healthy. And who has time for that? I need to read 20 books and articles just for my r&r. I gotta spend my weekends reading, blogging, and chewing my fingernails in anxiety over the quality of my work. And the rest of my time goes into family. Sure, I could let myself get sucked into academic service to expand my circle, but how often have you seen academics trying to get out of service and such, in order to get back to their offices to research, alone?

2. It’s made me fat and squirrely.

Part of spending too much time by yourself is letting yourself go. Groups help socialize and discipline behavior, so if you’re sitting at home all day reading alone, why not just wear pajamas the whole time? Actually, this was probably worst in grad school, when I recall lots of us thickened up because of the dramatic lifestyle change to sitting in a chair reading all day. If you’re not careful, it’s easy to fester, to become like Gollum living in your dissertation cave, obsessing over the precious as your nails get longer. You don’t shave enough; you write in your pajamas; you stop going to the gym. You probably start smoking. You eat crappy microwave meals and cereal for dinner, because you can bring the bowl easily to your workstation. When you do get a break, you binge drink too often. Your nails are now long enough that you really can climb the walls.

I’ve found this gets better later. I’m a lot better disciplined than 10 years ago. Marriage helps, if only because your spouse forces you out of the house when your pants stop fitting. She’ll force you to take a shower before checking your email in the morning, compel you to stop wearing the same clothes, tell you to shave more, and make you quit smoking. Students help too. Undergrads won’t respect you if you look like a furball TA, and they’re a helluva lot better dressed than you.

3. It’s made me hypersensitive to criticism.

I remember reading Walt somewhere saying that academics are very thin-skinned and hyper-sensitive.  I think I am too, although I am trying not to be. This is one reason I chose to blog; I thought it might toughen me up. But when reviewers and blog commenters criticize me, I inevitably take it the wrong way. It makes me nervous and skittish, as if maybe I’m a dilettante who got found out. (This is no plea for kid gloves, only an admission.) When I get rejection letters from academic journals, my hands shake (lame but true). I presume that means I am really insecure about my work, even though you’d think that would pass after 15 years. I think sometimes it’s because the only big thing I have in the professional world is my intellectual credibility. I have no big money, no cool DC or think-tank perch, no ‘network,’ no inside track to anything. The only reason anyone would even notice me is because I try to be a researcher who says stuff that can at least be verified somewhat. So I read at least an article of IR a day just out of anxiety. How’s that for job satisfaction?

Like everybody, I like being cited. It’s flattering. Andrew Sullivan has linked me twice, which sent thousands of people to my website. But honestly, it made me almost as nervous as happy – all those people pulling apart my work, maybe thinking it was just crap. Perhaps I’m just new at this, but I also think this is an artefact of the way we are trained – to ruthlessly tear apart essays in our coursework, or to ask the preening, show-off question that knocks the conference speaker or job applicant off-balance (did you select on the dependent variable?) and makes us look clever and witty in front of our colleagues. Who hasn’t seen that kind of sarcasm at conferences – cutting, ‘I can’t believe you wrote that’ analysis, ad hominem put-downs – most obviously on blogs? IR has never struck me as an especially polite, well-tempered field; it’s more like a shark-tank. Ned Lebow once told me that IR grad school is like ‘bootcamp for your brain,’ and it’s really true that we’ve created a hypercompetitive atmosphere.

I understand why, of course – US IR and other grad programs wouldn’t have the global reputations they do without it. And yes, I support it; quality control is a growing issue in the Korean university system, because Korea still lacks a major, globally ranked school. And of course, peer review is absolutely central to preserving quality and maintaining the line between us and journalism. But the tradeoffs are there – enervating and unnerving, at least in my experience. I can’t imagine how Andrew Sullivan or Stephen Walt go to sleep at night when all those red-staters, e.g., think they are the antichrist or something. I’d be pacing the bedroom.

4. The money is weak given the hours we put in.

This one is a no-brainer. Social science is nothing if not totalist. If you don’t believe me, just go watch a movie or TV show with a social scientist, and watch her analyze it to death, draining all the fun away by endlessly interrupting to explain why the Transporter is really a commentary on traffic laws or gun control. (I’m guilty of this too.) My point is that we see our work all over the place. We think about ‘opportunity costs’ when we pick movies on date night, or ‘free riding’ when the check comes for dinner. I guess this is good in one way. It means we are using our hard-won education. But it also means that we are effectively working all the time. Even if we are reading for leisure, we will still take notes or write things down if we catch something really relevant to our work. We take social science to the beach; we read Duck of Minerva on our iPhones on the subway. At this point, I read basically everything with a pen in my hand. Who knows if you won’t find a cool quote buried in the middle of Anna Karenina?

Worse, of course, is the absolutely impossible mountain of material in your field that you really should know if you want to somehow get into the top cut of journals. And who doesn’t want that? That’s the whole point. That’s why we do this to ourselves. We all, quite desperately I think, want our name up in lights in the APSR or IO. We all want to be invited to Rand or the State Department. I knew a guy who had the first page of his first APSR article embossed in gold to hang on his wall like a degree. (It was more tasteful than it sounds.) You’re always under-read, so you’re reading constantly. To be sure, your other friends in white-collar professions work long hours too. That’s a constant now, but they almost certainly get paid substantially more than you and think that all you do is teach five or ten hours a week. In short, when I compare the work levels between myself and the professionals just in my family and friends (doctor, dentist, automotive engineer, nurses, lawyer, computer design tech), they make a lot more than me even though I work fairly equivalent hours.

Of course, I knew when I joined that academics don’t make a lot of money, and I accept that. We all do. Rather, I am suggesting that, per work-hour, we make a lot less than most white-collar professionals. That’s kinda depressing, because, e.g., we scarcely have the resources to travel much in the countries we write about. You’ve probably mentioned China in some of your published work, right? But how much time have you actually spent there? Does it feel right to generalize about a place you’ve never visited?

5. The hours I put in aren’t really reflected in my output.

Connected to point 4 is, at least in my experience, the many, many hours I spend reading, blogging, thinking that result in – not very much… I genuinely wonder how someone, say Pinker, can write an 800+ page book with hundreds of footnotes, that’s also really good. Wow. That just blows me away. I’m so impressed, and how cool that he’ll get invited onto Charlie Rose or something. Or, how do Fukuyama or Bobbitt crank out multiple books of that length? Or how did Huntington manage to write a major book in each of the 4 subfields of political science? Where does one get skills like that? That just makes me green with envy. For me, I’d be thrilled if I could just land a top ten journal piece sometime soon.

I am reminded of a complaint by Schiller about Goethe’s poetry. He envied Goethe’s ability to easily reel off lines and lines of wonderful material while he had to work very hard to produce much less. In Amadeus, Salieri complained that Mozart seemed to be taking dictation from God, even though he himself worked hard too. When I read really good IR, it makes me wonder why I am not fitting together what I read into good insights, whereas writers so much better than me seem able to do so. How do they do that? Are they reading social science all the time, on Christmas morning too? How much more do I have to read? I feel like I read all the time already. I find this a chronic source of professional frustration.

6. Few people really give a d— what you think.

Unless you scale those Huntingtonian heights and get to Charlie Rose or Rand, your reach is pretty limited. Policy-makers are bombarded with a huge volume of material, but I recall reading somewhere that they almost always consult internally produced material (memos and reports from within the bureaucracy) rather than the kind of stuff we generate on the outside. So we aren’t really policy-relevant much, unless you are the really big fish like Bernard Lewis (who got to meet W on Iraq – and blew it).

Beyond that, there are so many IR journals now (59 in the SSCI alone) that your work easily slips into the great ocean of Jstor. If you land APSR or ISQ, that’s awesome, but beyond the biggest IR journals that we all cite to each other, it’s hard to get profile for yourself. This may be another reason to blog. You can go around the editorial r&r process and speak directly to the community. But of course, blogging or op-eds aren’t peer-reviewed, and, as Steve Saideman noted, that is (and must be) the gold-standard. Worse, everybody’s blogging and tweeting and consulting now, so you’re still lost in the crowd. This too can be enervating and depressing, especially as you came into grad school as one of the better students of your college. You thought you were pretty smart, and you’d make a big splash. Now you find out that there are lots and lots of others in the field, all very smart and clamoring to be heard. Good luck.

7. I miss the ‘classics.’

The super-nerdy intellectual in me really misses this. Those black-edged Penguin Classics were the books that really got me interested in politics and ideas when I was in high school, and I never read them anymore. The first time I read Thucydides was an absolutely electric experience. I roared through it in 4 days. Same goes for stuff like On Liberty, Beyond Good and Evil, The Communist Manifesto, Darkness at Noon, 1984. God, I miss that stuff, the sheer intellectual thrill of new vistas opening. Now all I read is hyper-technical stuff, loaded with jargon, mostly from economics, so I can sound like a robot (defection, spirals, stochastic, satisficing, barriers to entry, iteration) when I talk if I need to. See Dan Nexon on this too.

As with everything else I’ve complained about above, I understand why we do this and I accept it. We can’t really read Plato or Bodin all day in IR, but I sure wish we could. I’ve often thought that IR should have a book series of classic works in our field, with introductions and notes connecting classics like Thucydides, Kant, or Clausewitz to contemporary IR. We make throw-away references to these guys all the time in our introductions to make ourselves sound smart and grounded in the long tradition of political philosophy. But we don’t really read them, because we’re reading post-Theory of International Politics stuff most of the time. When is the last time you opened up Sun Tzu or Machiavelli?

So taking a cue from Doyle’s effort to tie IR to the ‘Conversation,’ we could release volumes like the Norton Critical Editions or the Cambridge Texts in the History of Political Thought. But the selected texts would be more narrowly relevant to IR, and the editorial matter and essays would explicitly connect the book to IR. Reading Hobbes in an edition designed solely for IR readers would be pretty fascinating, no?

Bonus Immaturity: I knew I was a hopelessly cloistered academic the first time I glared at a difficult student over my glasses on the end of my nose, while sitting behind my desk. Good grief. I remember that pose from my own undergrad and that I wanted to punch professors like that…

Cross-posted on Asian Security Blog.


The World Does its Duty & Conforms to Social Science: More on Korea & Japan

If academia’s taught me anything, it’s that the real world is flawed, not theory, and that facts should change for me, not the other way around. As Marxists would say, ‘the future is certain; it’s the past that keeps changing,’ and Orwell famously quipped that academics would love to get their hands on the lash to force the world to fit theory. (I guess Heinlein agreed; check the vid.) So I am pleased to say that the world met its obligations to abstraction this week, a little: Japan and Korea edged a little closer toward a defense agreement (here and here). A little more of this, and I can safely ignore – whoops, I mean ‘bracket’ – any real case knowledge…

Last week I argued that Korea and Japan seem like they’d be allies according to IR theory, but weren’t. I wrote, “Koreans stubbornly refuse to do what social science tells them;” obviously they don’t realize that abstraction overrules their sovereignty. I thought this was fairly puzzling, but I got an earful from the Korea/Asia studies crowd about how I was living in the clouds of theory. I also learned that area studies folks don’t really like it when you throw stuff like ‘exogenous’ and ‘epiphenomenal’ at them. Once they figure out what ‘nomothetic’ actually means, they think you’re conning them. D’oh!

So for those of you who argued that I didn’t know anything about Korea or Japan (a fair point) but was just blathering on about theory with no necessary time-space application to this case, I thought I’d put up this bit from Starship Troopers. It’s hysterical – when PhDs rule the world, apparently the military has to step in to prevent us from running it over a cliff. Didn’t Buckley once say he’d rather have the first 2,000 names in the Boston phone book run the US government than the faculty of Harvard?

Cross-posted at Asian Security Blog.


In Social Science, You’re always Under-read, so How do you Choose? (2)


Here is part one, where I noted Walt, the Duck, and Walter Russell Mead as the IR blogs I almost always read, despite the avalanche of international affairs blogs now. Here are a few more:
Martin Wolf: Here’s a grad school education in IPE, op-ed by op-ed, better day-to-day than either Krugman or the Economist. Not being an economist, but facing regular student questions for years about the Great Recession and the euro-zone crisis, I have found Wolf indispensable in explaining what happened in the last 5 years – and without that ‘bankers as masters of the universe’ schtick coming from CNBC, Bloomberg, and the WSJ. Wolf is a delight to read. Like Andrew Sullivan, he is measured, changes his mind when information dramatically changes, references theory but not as ideology or fundamentalism, and has a good touch for what can realistically be accomplished in actual democratic politics.
 
Glenn Greenwald: Walt turned me onto Greenwald’s work, which I think is just super. The surfeit of links helps guide the reader to lots of supporting material, which should be a model of rigor for all bloggers. The writing is sharp and insightful, and he has a feel for real case law that academics focused on theory will never have. So when you feel like drones and warrantless wiretapping are probably illegal, but you don’t know anything about the relevant statutes, Greenwald shows the way. But most importantly, Greenwald, more than any other serious high-profile figure, has courageously, thanklessly insisted on publicizing all the legal violations, non-combatant deaths, and other abuses that have flowed from the GWoT. His humanity over the river of blood unleashed by the GWoT embarrasses the coarse, ‘we-don’t-do-body-counts’ American concern for only US combat casualties. I can think of no author more prominent who ceaselessly reminds us of all the brown Muslims we have killed and the carnage we have wreaked in the Arab world, and rightly chastises us for not giving a damn. No other writer has changed my mind about our ‘ghost wars’ as much as he has, and no one else I can think of takes the journalist’s code of adversarial oversight as seriously. You could dump the whole op-ed team of the Washington Post for Greenwald, and that would be an improvement.
 
What other blogs am I missing? And if you say Thomas Friedman, you are never allowed to visit this site again.
 
2. Basic News
For basic news, I get the daily newsletters from FP and CFR. Does anyone else use these? I find them very helpful, and vastly more time-efficient than watching CNN or TV news. I get BBC here, which is pretty good, and SK news is ok, but in general I get less and less from TV.

3. Journals
When I got my first post-grad school job, I finally had the money to seriously subscribe to journals on my own, and I thought it looked pretty cool to list all those associations on my vita. So I went overboard, getting some mix of IO, IS, EJIR, SS, RIS, FP, FA, APSR, ISQ, IRAP, WP, RIPE. But this costs a mountain of money (especially when you live outside the US), and within a year, I learned there was just no way to read anything even close to that amount of material. So they piled up unread on my office shelves (hopefully they look imposing to visitors). Now I get the ToC e-updates instead and wait for someone to tell me that I need to read this or that article. I almost never simply open a paper copy of a serious IR journal and browse it as if it were a copy of the Economist. Does anyone read the journals that way, or do you hunt out specific pieces?

Now, if there is an article I really want, first I google it or jstor it. If that doesn’t work, I email the author directly. Does anyone else do this? I was too scared to do something like that in grad school, but once I started doing it a few years ago, I found that people almost always send me their stuff, and then some. Almost everyone keeps copies of their pubs in PDF, and it’s surely flattering to get solicited. It’s also a nice way to meet someone who’s interested in a roughly similar area. (Also, for Asia folks, two good journals – the Korean Journal of Defense Analysis and Global Asia [sort of an FA for Asia] – offer their stuff gratis.) So here’s another question: do you pay for journals (other than those you get from a membership like ISA)? And if so, which ones? IS, WP and IO probably, right?

Finally, all this is why Brian Rathbun’s call to ‘read more, write less’ is one of my favorite posts on the Duck. I cite it a lot, and I would add a Roosevelt corollary – read slowly, at a desk, with a pen in your hand. IR theory is tough; it can get d— complicated and dry. There’s no way you can read something mind-breakingly difficult like Perception and Misperception or Hierarchy in International Relations quickly and follow the argument (well, I can’t). If we read more and wrote less, what we did write would be so much better, and we could slow this ‘avalanche of undone reading’ problem.

NB: Fukuyama is blogging again.

Cross-posted at Asian Security Blog.

Share

In Social Science, You’re Always Under-read, so How do You Choose? (1)


If there is one constant to modern social science, it is that you are always under-read. There is always some critical book you missed, some article you never had time for, some classic of which you only read the first and last chapters in grad school. And this is just the modern work immediately relevant to your field. After college you all but gave up on reading the ‘great books’ in the Chicago sense – Plato, Augustine, Mill, Nietzsche, etc. That’s the stuff that really got you interested in social analysis – you’ve still got a marked up copy of Aristotle’s Politics somewhere – but if you cite these guys today, it’s usually just a lifted quote from someone else’s modern social science book that you are reading. Your own black-edged Penguin Classics are collecting dust. If it wouldn’t be so uncomfortable, it would be fascinating to hear what ‘obligatory’ IR classics Duck readers haven’t actually read and why not.

One good measure of this overload in IR is the Social Science Citation Index list – there are now 59 SSCI journals just in IR, 112 in Political Science, and 750 total. They are publishing roughly 4 issues a year, 6 articles per issue. This is just a crushing load: 59 x 4 x 6 = my head explodes like that dude from Scanners. In the end, the best you can do is follow the top ten or so, plus maybe one or two in your unique area. Then come all the books. Just in the last 6 months, you know you need to read Fukuyama’s new book on order and Pinker’s on violence (so long!), and who even wants to touch yet another laughably misnamed IR ‘handbook’ so heavy you could use it as a doorstop in a tornado?
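For the masochists, the head-exploding arithmetic is easy enough to run out. A rough sketch: the journal counts are the SSCI figures quoted above, while the issue and article rates are the post’s own ballpark estimates.

```python
# Back-of-the-envelope reading load implied by the SSCI numbers above.
# Journal counts are as quoted; 4 issues x 6 articles per year is rough.
ir_journals, polisci_journals, total_journals = 59, 112, 750
issues_per_year, articles_per_issue = 4, 6

ir_articles = ir_journals * issues_per_year * articles_per_issue
print(f"IR alone: ~{ir_articles} articles a year")
print(f"All SSCI social science: ~{total_journals * issues_per_year * articles_per_issue}")
```

Even the fallback of following “the top ten or so” still runs to roughly 240 articles a year.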

And blogging just makes this worse. Now, on top of all those articles and books you haven’t read and which will embarrass you at the next APSA, everybody’s got a blog. But blogging feels so much nicer than articles – the style is gentler and more readable, there is some humor (a cardinal sin in the SSCI), blog-posts are mercifully short, and you don’t need to read them with a pen in your hand. So you’d rather read blogs than read the latest ISQ – hence you get lost on the internet all day and fall yet further behind.

On top of this, of course, you want to have a life – your spouse couldn’t care less about the difference between counterforce and countervalue, and, truth be told, you care a lot more about the degenerative ad hoc emendations of Star Wars on disc than about some IR paradigm (realism as structural, offensive, defensive, critical postpositivist, whatever, I don’t know anymore).

So one thing I’ve wondered about since grad school is how other people in IR manage the massive data/research flow. What are your strategies for sorting out which parts of this huge flood of writing are worth your time, a flood that is only increasing with the proliferation of IR blogs? I feel just overwhelmed, so here are my three lessons learned to date:

1. Blogs

I get lots of RSS feeds but, like everything else, they’ve proliferated so much that I blow through most of them now. Hence the inevitable culling toward just a few that I find regularly reliable/important:

Walt at FP: I guess everybody reads this, right? It’s very high-profile; he’s the chair of the best political science department in the world; he’s a great scholar; and the blog is really good. Even if you disagree with him (I thought he was very wrong on Libya), I almost always find Walt worth the time.

Duck of Minerva: I guess since I write for this site, this is expected, but I think the stuff on the Duck is actually quality IR; sometimes it feels like grad school all over again. Laura Sjoberg, e.g., used to write long, theory-heavy posts so good that I got headaches. And I don’t think there are too many other strictly IR theory blogs. There are lots of blogs on international affairs generally understood – everybody wants to be Fareed Zakaria. But how many blogs written solely for IR theory, and written with its assumptions in mind, are there?

Walter Russell Mead: I think Mead is the best conservative intellectual writing about foreign affairs on a blog. He’s not a neocon, ideologue, red-state evangelical, American exceptionalist, militarist, or suffering from any of the usual military-industrial complex-worshipping, rightist pathologies that undermine the writing of people like Kaplan or the Kagans, much less Kristol, Krauthammer, etc. Mead also usefully insists on analytically stressing religion and culture, which IR should do more of. Finally, he’s also got a nice grounding in history – American, British, and classical – that gives his work real depth.

More in a few days.

Cross-posted at Asian Security Blog.

Share

Experiments, Social Science, and Politics

[This post was written by PTJ]

One of the slightly disconcerting experiences from my week in Vienna teaching an intensive philosophy of science course for the European Consortium on Political Research involved coming out of the bubble of dialogues with Wittgenstein, Popper, Searle, Weber, etc. into the unfortunate everyday actuality of contemporary social-scientific practices of inquiry. In the philosophical literature, an appreciably and admirably broad diversity reigns, despite the best efforts of partisans to tie up all of the pieces of the philosophy of science into a single and univocal whole or to set perennial debates unambiguously to rest: while everyone agrees that science in some sense “works,” there is no consensus about how and why, or even whether it works well enough or could stand to be categorically improved. Contrast the reigning unexamined and usually unacknowledged consensus of large swaths of the contemporary social sciences that scientific inquiry is neopositivist inquiry, in which the endless drive to falsify hypothetical conjectures containing nomothetic generalizations is operationalized in the effort to disclose ever-finer degrees of cross-case covariation among ever-more-narrowly-defined variables, through the use of ever-more sophisticated statistical techniques. I will admit to feeling more than a little like Han Solo when the Millennium Falcon entered the Alderaan system: “we’ve come out of hyperspace into a meteor storm.”

Two examples leap to mind, characteristic of what I will somewhat ambitiously call the commonsensical notion of inquiry in the contemporary social sciences. One is the recent exchange in the comments section of PM’s post on Big Data (I feel like we ought to treat that as a proper noun, and after a week in a German-speaking country capitalizing proper nouns just feels right to me) about the notion of “statistical inference,” in which PM and I highlight the importance of theory and methodology to causal explanation, and Eric Voeten (unless I grossly misunderstand him) suggests that inference is a technical problem that can be resolved by statistical techniques alone. The second is the methodological afterword to the AAC&U report “Five High-Impact Practices” (the kind of thing that those of us who wear academic administrator hats in addition to our other hats tend to read when thinking about issues of curriculum design), which echoes some of the observations made in the main report on the methodological limitations of research on higher-education practices such as first-year seminars and undergraduate research opportunities — what is called for throughout is a greater effort to deal with the “selection bias” caused by the fact that students who select these programs as undergraduates might be those students already inclined to perform well on the outcome measures that are used to evaluate the programs (students interested in research choose undergraduate research opportunities, for example), and therefore it is difficult if not impossible to ascertain the independent impact of the programs themselves. (There are also some recommendations about defining program components more precisely so that impacts can be further and more precisely delineated, especially in situations where a college or university’s curriculum contains multiple “high-impact practices,” but those just strengthen the basic orientation of the criticisms.)

The common thread here is the neopositivist idea that “to explain” is synonymous with “to identify robust covariations between,” so that “X explains Y” means, in operational terms, “X covaries significantly with Y.” X’s separability from Y, and from any other independent variables, is presumed as part of this package, so efforts have to be taken to establish the independence of X. The gold standard for so doing is the experimental situation, in which we can precisely control for things such that two populations only vary from one another in their value of X; then a simple measurement of Y will show us whether X “explains” Y in this neopositivist sense. Nothing more is required: no complex assessments of measurement error, no likelihood estimates, nothing but observation and some pretty basic math. When we have multiple experiments to consider, conclusions get stronger, because we can see — literally, see — how robust our conclusions are, and here again a little very basic math suffices to give us a measure of confidence in our conclusions.
But note that these conclusions are conclusions about repeated experiments. Running a bunch of trials under experimental conditions allows me to say something about the probability of observing similar relationships the next time I run the experiment, and it does so as long as we adopt something like Karl Popper’s resolution of Hume’s problem of induction: no amount of repeated observation can ever suffice to give us complete confidence in the general law (or: nomothetic relationship, since for Popper as for the original logical positivists in the Vienna Circle a general law is nothing but a robust set of empirical observations of covariation) we think we’ve observed in action, but repeated failures to falsify our conjecture are a sufficient basis for provisionally accepting the law. The problem here is that we’ve only gotten as far as the laboratory door, so we know what is likely to happen in the next trial, but what confidence do we have about what will happen outside of the lab? The neopositivist answer is to presume that the lab is a systematic window into the wider world, that statistical relationships revealed through experiment tell us something about one and the same world — a world the mind-independent character of which underpins all of our systematic observations — that is both inside and outside of the laboratory. But this is itself a hypothetical conjecture, for a consistent neopositivism, so it too has to be tested; the fact that lab results seem to work pretty well constitutes, for a neopositivist, sufficient failures to falsify that it’s okay to provisionally accept lab results as saying something about the wider world too.
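The “nothing but observation and some pretty basic math” point can be made concrete with a toy simulation. This is only an illustration of the experimental logic described above, not of any real experiment: all the numbers (sample size, effect size, noise) are invented.

```python
# A minimal sketch of the experimental logic: two populations that differ
# only in X, repeated trials, and nothing but observation plus basic math.
import random

random.seed(42)

def run_trial(n=100, effect=0.5):
    # Control group (X absent) vs. treatment group (X present); by design,
    # the only difference between the two populations is the value of X.
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n  # observed difference in Y

# Repeated experiments: the trial-to-trial spread of results is itself the
# measure of confidence -- no likelihood machinery or estimation required.
diffs = [run_trial() for _ in range(200)]
mean_diff = sum(diffs) / len(diffs)
spread = (sum((d - mean_diff) ** 2 for d in diffs) / len(diffs)) ** 0.5
print(f"mean effect ~ {mean_diff:.2f}, trial-to-trial spread ~ {spread:.2f}")
```

The point of the sketch is how little mathematics is involved: a difference of means per trial, and the dispersion across trials as the confidence measure.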
Now, there’s another answer to the question of why lab results work, which has less to do with conjecture and more to do with the specific character of the experimental situation itself. In a laboratory one can artificially control the situation so that specific factors are isolated and their independent effects ascertained; this, after all, is what lab experiments are all about. (I am setting aside lab work involving detection, because that’s a whole different kettle of fish, philosophically speaking: detection is not, strictly speaking, an experiment, in the sense I am using the term here. But I digress.) As scientific realists at least back to Rom Harré have pointed out, this means that the only way to get those results out of the lab is to make two moves: first, to recognize that what lab experiments do is to disclose causal powers, defined as tendencies to produce effects under certain circumstances, and second, to “transfactually” presume that those causal powers will operate in the absence of the artificially-designed laboratory circumstances that produce more or less strict covariations between inputs and outputs. In other words, a claim that this magnetic object attracts this metallic object is not a claim about the covariation of “these objects being in close proximity to one another” and “these objects coming together”; the causal power of a magnet to attract metallic objects might or might not be realized under various circumstances (e.g. in the presence of a strong electric field, or the presence of another, stronger magnet). It is instead not a merely behavioral claim, but a claim about dispositional properties — causal powers, or what we often call in the social sciences “causal mechanisms” — that probably won’t manifest in the open system of the actual world in the form of statistically significant covariations of factors.
Indeed, realists argue, thinking about what laboratory experiments do in this way actually gives us greater confidence in the outcome of the next lab trial, too, since a causal mechanism is a better place to lodge an account of causality than a mere covariation, no matter how robust, could ever be.
Hence there are at least two ways of getting results out of the lab and into the wider world: the neopositivist testing of the proposition that lab experiments tell us something about the wider world, and the realist transfactual presumption that causal powers artificially isolated in the lab will continue to manifest in the wider world even though that manifestation will be greatly affected by the sheer complexity of life outside the lab. Both rely on a reasonably sharp laboratory/world distinction, and both suggest that valid knowledge depends, to some extent, on that separation. This impetus underpins the actual lab work in the social sciences, whether psychological or cognitive or, arguably, computer-simulated; it also informs the steady search of social scientists for the “natural experiment,” a situation close enough to a laboratory experiment that we can almost imagine that we ran it in a lab. (Whether there are such “natural experiments,” really, is a different matter.)
Okay, so what about, you know, most of the empirical work done in the social sciences, which doesn’t have a laboratory component but still claims to be making valid claims about the causal role of independent factors? Enter “inferential statistics,” or the idea that one can collect open-system, actual-world data and then massage it to appropriately approximate a laboratory set-up, and draw conclusions from that.

Much of the apparatus of modern “statistical methods” comes in only when we don’t have a lab handy, and is designed to allow us to keep roughly the same methodology as that of the laboratory experiment despite the fact that we don’t, in fact, run experiments in controlled environments that allow us to artificially separate out different causal factors and estimate their impacts. Instead, we use a whole lot of fairly sophisticated mathematics to, put bluntly, imagine that our data was the result of an experimental trial, and then figure out how confident we can be in the results it generated. All of the technical apparatus of confidence intervals, different sorts of estimates, etc. is precisely what we would not need if we had laboratories, and it is all designed to address this perceived lack in our scientific approach. Of course, the tools and techniques have become so naturalized, especially in Political Science, that we rarely if ever actually reflect on why we are engaged in this whole calculation endeavor; the answer goes back to the laboratory, and its absence from our everyday research practices.
But if we put the pieces together, we encounter a bit of a profound problem: we don’t have any way of knowing whether these approximated labs that we build via statistical techniques actually tell us anything about the world. This is because, unlike an actual lab, the statistical lab-like construction (or “quasi-lab”) that we have built for ourselves has no clear outside — and this is not simply a matter of trying to validate results using other data. Any actual data that one collects still has to be processed and evaluated in the same way as the original data, which — since that original process was, so to speak, “lab-ifying” the data — amounts, philosophically speaking, to running another experimental trial in the same laboratory. There’s no outside world to relate to, no non-lab place in which the magnet might have a chance to attract the piece of metal under open-system, actual-world conditions. Instead, in order to see whether the effects we found in our quasi-lab obtain elsewhere, we have to convert that elsewhere into another quasi-lab. Which, to my mind, raises the very real possibility that the entire edifice of inferential statistical results is a grand illusion, a mass of symbols and calculations signifying nothing. And we’d never know. It’s not like we have the equivalent of airplanes flying and computers working to point to — those might serve as evidence that somehow the quasi-lab was working properly and helping us validate what needs to be validated, and vice versa. What we have is, to be blunt, a lot of quasi-lab results masquerading as valid knowledge.
One solution here is to do actual lab experiments, the results of which could be applied to the non-lab world in a pretty straightforward way whether one were a neopositivist or a realist: in neither case would one be looking for covariations, but instead one would be looking to see how and to what degree lab results manifested outside of the lab. Another solution would be to confine our expectations to the next laboratory trial, which would mean that causal claims would have to be confined to very similar situations. (An example, since I am writing this in Charles De Gaulle airport, a place where my luggage has a statistically significant probability of remaining once I fly away: based on my experience and the experience of others, I have determined that CDG has some causal mechanisms and processes that very often produce a situation where luggage does not make it on to a connecting flight, and this is airline-invariant as far as I can tell. It is reasonable for me to expect that my luggage will not make it onto my flight home, because this instance of my flying through CDG is another trial of the same experiment, and because so far as I know and have heard nothing has changed at CDG that would make it any more likely that my bag will make the flight I am about to board. What underpins my expectation here is the continuity of the causal factors, processes, and mechanisms that make up CDG, and generally inclines me to fly through Schiphol or Frankfurt instead whenever possible … sadly, not today. This kind of reasoning also works in delimited social systems like, say, Major League Baseball or some other sport with sufficiently large numbers of games per season.)
Not sure how well this would work in the social sciences, unless we were happy only being able to say things about delimited situations; this might suffice for opinion pollsters, who are already in the habit of treating polls as simulated elections, and perhaps one could do this for legislative processes so long as the basic constitutional rules both written and unwritten remained the same, but I am not sure what other research topics would fit comfortably under this approach.
[A third solution would be to say that all causal claims were in important ways ideal-typical, but explicating that would take us very far afield so I am going to bracket that discussion for the moment — except to say that such a methodological approach would, if anything, make us even more skeptical about the actual-world validity of any observed covariation, and thus exacerbate the problem I am identifying here.]
But we don’t have much work that proceeds in any of these ways. Instead, we get endless variations on something like the following: collect data; run statistical procedures on data; find covariation; make completely unjustified assumption that the covariation is more than something produced artificially in the quasi-lab; claim to know something about the world. So in the AAC&U report I referenced earlier, the report’s authors and those who wrote the Afterword are not content simply to collect examples of innovative curriculum and pedagogy in contemporary higher education; they want to know, e.g., if first-year seminars and undergraduate research opportunities “work,” which means whether they significantly covary with desired outcomes. So to try to determine this, they gather data on actual programs … see the problem?

The whole procedure is misleading, almost as if it made sense to run a “field experiment” that would conduct trials on the actual subjects of the research to see what kinds of causal effects manifested themselves, and then somehow imagine that this told us something about the world outside of the experimental set-up. X significantly covarying with Y in a lab might tell me something, but X covarying with Y in the open system of the actual world doesn’t tell me anything — except, perhaps, that there might be something here to explain. Observed covariation is not an explanation, regardless of how complex the math is. So the philosophically correct answer to “we don’t know how successful these programs are” is not “gather more data and run more quasi-experiments to see what kind of causal effects we can artificially induce”; instead, the answer should be something like “conceptually isolate the causal factors and then look out into the actual world to see how they combine and concatenate to produce outcomes.” What we need here is theory and methodology, not more statistical wizardry.
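The claim that observed covariation is not an explanation can be illustrated with a toy confounding example. The data here are entirely synthetic and the variable names are mine: X has no causal effect on Y at all, yet the two covary strongly because an unobserved factor Z drives both.

```python
# Illustration: strong open-system covariation with zero causal effect.
import random

random.seed(0)

n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]    # unobserved common cause
x = [zi + random.gauss(0, 0.5) for zi in z]   # Z -> X
y = [zi + random.gauss(0, 0.5) for zi in z]   # Z -> Y (X plays no role in Y)

def corr(a, b):
    # Pearson correlation, computed by hand to keep the sketch self-contained.
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(f"corr(X, Y) = {corr(x, y):.2f}")  # large, despite X having no effect on Y
```

No amount of statistical sophistication applied to X and Y alone distinguishes this case from a genuinely causal one; that work is done by theory and methodology, which is the essay’s point.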

Of course, for reasons having more to do with the sociology of higher education than with anything philosophically or methodologically defensible, academic administrators have to have statistically significant findings in order to get the permission and the funding to do things that any of us in this business who think about it for longer than a minute will agree are obviously good ideas, like first-year seminars and undergraduate research opportunities. (Think about it. Think … there, from your experience as an educator, and your experience in higher education, you agree. Duh. No statistics necessary.) So reports like the AAC&U report are great political tools for doing what needs to be done.

And who knows, they might even convince people who don’t think much about the methodology of the thing — and in my experience many permission-givers and veto-players in higher education don’t think much about the methodology of such studies. So I will keep using it, and other such studies, whenever I can, in the right context. Hmm. I wonder if that’s what goes on when members of our tribe generate a statistical finding from actual-world data and take it to the State Department or the Defense Department? Maybe all of this philosophy-of-science methodological criticism is beside the point, because most of what we do isn’t actually science of any sort, or even all that concerned with trying to be a science: it’s just politics. With math. And a significant degree of self-delusion about the epistemic foundations of the enterprise.

Share

Pornography and National Security: The ever expanding threat

In today’s ‘horrors of bad social science’, we have a piece by Jennifer S. Bryson, director of the Witherspoon Institute’s Islam and Civil Society Project (the Institute seems to be a conservative think-tank), writing on the Institute’s blog about the threat of pornography to national security. (No really.)

Bryson asks the question that no serious scholar has ever, ever addressed and comes up with an argument to be considered. In fact, she is getting right on top of this hard and pressing issue. She reaches around the boundaries of conventional thinking about terrorism and slowly but steadily penetrates the burning question as to whether pornography drives a serious challenge to National Security:

With the tenth anniversary of the 9/11 attacks staring us in the face, we already know that our failure to have an approach to security that is robust and accurate has dire consequences. Pornography has long circulated nearly unbounded due to calls for “freedom,” but what if we are actually making ourselves less free by allowing pornography itself to be more freely accessible?
Are there security costs to the free-flow of pornography? If so, what are they? Are we as a society putting ourselves at risk by turning a blind eye to pornography proliferation?
I wonder further: Could it be that pornography drives some users to a desperate search for some sort of radical “purification” from the pornographic decay in their soul? Could it be that the greater the wedge pornography use drives between an individual’s religious aspirations and the individual’s actions, the more the desperation escalates, culminating in increasingly horrific public violence, even terrorism?

Let me tell you, now that we’ve been stirred to this threat – of young men somehow being converted to wicked, wicked ways – we need to act now, right here and now, damn it! Clearly the perpetrators of this filth have been very, very bad and need to be punished.

I believe that we all need to come together, scholars, government workers, NGOs, and throw caution to the wind. We need to straddle the division between us, fuse ourselves together and come up with an inspired solution. Let’s use each other to the very best of our abilities, and respond quickly to this vitally important need.

It’s Friday night so I’m just going to be at home thinking really long and hard about a solution to this problem. I’m just going to lie back right here by my lonesome self, thinking about nothing but pornography… for the sake of National Security.

Share

Holy Degrees of Freedom Problem, Batman!

Or, how do we compare the not so nearly like?  There is an obvious temptation to compare Libya to Syria and ponder why the US has not jumped into the fray now that Syria has started killing lots of people (to be fair, the piece does show how different the cases are).  Now, I am not a Middle East expert (and I avoid playing one on TV), but this is a handy opportunity to think about how we do comparisons and then maybe we can figure out what is relevant here.

In the first week of my big intro to International Relations class, I spend a bit of time explaining that there are few perfect comparisons in the world so that we must, indeed, compare apples and oranges.  I go on to show how similar the two fruit are in nearly every way save one, and then I bite into the unpeeled orange.  The point is to illustrate most similar comparisons and that we are always comparing apples and oranges.  I then go on to compare an apple and a frisbee*–a most different comparison–where the two objects share few common properties but both can be thrown.  I then compare Iraq to North Korea to suggest why one was, pardon the continued fruit obsession, low-hanging fruit.  One key difference was oil, but that was not the only one then (or now).

*  Some have used apples vs. wolverines as the alternative to apples and oranges, but a frisbee is far safer in the classroom, no matter who ends up catching a disk with their face.

As a result of doing this every year, I have now started looking at things like Libya and Syria and think: how comparable are these two cases?  Is Syria more of an orange to Libya’s apple or is it more frisbee-esque?  The similarities are obvious: two Middle East countries where the dictators are responding to protest by using force.  Asad does not have Qaddafi’s fashion sense, but, otherwise, the two cases seem pretty similar.  So, it seems that we have a most similar comparison, but there are several differences between the two cases, so it is hard to tell which ones matter the most.

What are the differences?

  • Libya is far closer to the heart of Europe.  Proximity matters not just in power projection (which is what the realists would consider), but also in terms of migration projection (what Kelly Greenhill would consider). The Europeans, especially the French, are far more energized about the Libyan situation than the Syrian one perhaps because their rising xenophobia makes any immigration politically dangerous.
  • Libya was always very isolated from the rest of the Arab world.  Indeed, Qaddafi has done a fine job over the years of alienating pretty much everyone in the region and beyond.
  • We care more about Turkey’s objections when it comes to a neighbor than when it was a distant Libya (same goes for Israel and other allies).
  • Syria is AFTER Libya.  Sequencing matters.  That is, the US and its allies have finite capabilities, so there is less left on the shelf if one wanted to be more coercive.  Plus, being AFTER also means that the costs and limitations of the Libyan effort make a Syrian intervention all that less attractive.  Libya reminds us that this stuff is really hard.  

We can probably think of other differences (but again, I am not a specialist in this part of the world).  The key here is that there are more differences than cases, so it is very hard to figure out which of these factors matters the most–the aforementioned degrees of freedom problemo.  One thing to keep in mind is that the US was not looking to intervene in Libya but was actively trying to avoid doing so.  The Europeans got the US involved.  Lacking that push this time and facing a far more complex environment, the US is likely to do less here.
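The degrees-of-freedom problem above can even be brute-forced. A minimal sketch, with an entirely hypothetical binary coding of the four differences listed in the bullets (the coding and the crude “explanation” rule are mine, for illustration only): with just two cases, several rival accounts fit the evidence equally well.

```python
# With more candidate differences than cases, the evidence cannot
# discriminate among rival explanations. Hypothetical 0/1 coding.
from itertools import combinations

factors = ["proximity", "isolation", "ally_objections", "sequencing"]
cases = {
    "Libya": {"proximity": 1, "isolation": 1, "ally_objections": 0,
              "sequencing": 0, "intervention": 1},
    "Syria": {"proximity": 0, "isolation": 0, "ally_objections": 1,
              "sequencing": 1, "intervention": 0},
}

def explains(subset):
    # Crude candidate rule: intervention occurs iff every factor in the
    # subset is present. A fit must hold across both cases.
    return all(
        all(c[f] for f in subset) == bool(c["intervention"])
        for c in cases.values()
    )

rivals = [s for r in range(1, len(factors) + 1)
          for s in combinations(factors, r) if explains(s)]
print(len(rivals), "rival explanations fit both cases:", rivals)
```

Under this coding, proximity alone, isolation alone, and the two combined all “explain” the outcomes perfectly, and the two cases give us no way to choose among them, which is the degrees-of-freedom problem in miniature.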

Share

Fortune-Tellers of Foreign Policy

Congressional hand-wringing over America’s inability to forecast the Egyptian and Tunisian revolts is unsurprising given the foreign policy hubris that dominates in Washington today.  How can it be, the cry goes out, that America was blindsided by these earthshaking events?  Doesn’t “exceptional” America see further and act more wisely than other nations?

Sadly, that arrogant and delusional mindset is unlikely to be changed even by this latest “intelligence” failure.  Rather than questioning whether anyone could have predicted this kind of event—let alone whether we should be trying to control the future of other societies—the response is likely to be:  let’s throw more money at the problem! 
In fact, this “failure” is part of a broader, failed effort to know and control the foreign policy future, led by groups like the Defense Advanced Research Projects Agency (DARPA) and its Integrated Crisis Early Warning System (ICEWS).  As Noah Shachtman points out (h/t Dan Nexon), this project has gobbled up hundreds of millions of dollars.   Yet its predictions are no better than those of a handful of area specialists—or, probably, a cup of tea leaves.
In that light, ICEWS and DARPA are useful primarily for keeping Defense Department and “intelligence” budgets growing.  What better way to generate a constant flow of dollars than having not only trumped-up “threats” like terrorism but also “crisis forecasts” that would require immediate, costly “readiness” efforts?
Consider, for instance, what might have happened if ICEWS had in fact foretold Mubarak’s resignation a year ago.  What could the U.S. have done with that information?

First of all, unless the system was 100% accurate, it would no doubt have been dangerous or at least unpredictable to do anything.  In any case, doing something would no doubt have screwed up the model itself.  But leave aside that trifling matter.  Better yet, invent a technological fix, a feedback factor!  If the seers of DARPA can predict the future, let’s also allow them to feed any reaction into their computers and predict how it would affect the model, again with 100% accuracy.  
A year ago, our good ally and Hillary Clinton’s dear friend, Hosni, seemed untouchable.  Autocratic “stability” reigned supreme in the Middle East–just as America has long preferred.  In that case, top U.S. officials, perhaps Hillary herself, would no doubt have tipped him off to his predicted end.  Yet I somehow doubt that Mubarak would have taken the news submissively, cowed by some pointy-headed modelers.  Rather, he would have unleashed a wave of additional repression against those he deemed likely to foment unrest. 
What if we’d kept the prediction to ourselves?  Would the U.S. have started quietly pulling diplomatic staff from Egypt, discreetly advising tourists to head to Cancun rather than Cairo, or at least tipping off the hotheads in Congress who are desperate to be ahead of the curve?  Perhaps. 
But one thing is certain:  There would have been a large uptick in defense department contingency planning and spending—justified by “science,” but to little useful purpose.
What about a seemingly beneficent example of forecasting the future—to take the most extreme one, predicting when genocide will occur with the idea of preventing it?  Certainly, that would be a wonderful thing and could save countless lives—if, again, it could be done with 100% accuracy. 
But, just as in the Egyptian revolt case, there are far too many unknowns to predict this kind of result far enough in advance to prevent it.  Certainly, there may be  “warning signs”—like a repressive government suddenly issuing cards identifying all members of a nation by ethnicity, or preparing a plan to systematically slaughter them.  But do we really need massive computer programs to pick these things up? 
Our tools for doing anything in the face of these signs are in any case crude—though peaceful conflict prevention measures would probably be worth trying in some cases.  But what about a massive military intervention before a genocide had started?  This seems infeasible—and in fact likely to trigger the very thing it is aimed to halt.  Milosevic’s reaction to the bombing of Yugoslavia in 1999 after the Rambouillet ultimatum is a case in point—perhaps not genocide, per se, but certainly mass expulsions prompted by international actions against him.
Major events like the fall of a government or genocide are highly unpredictable beyond a very short time frame.  And, if one does not have 100% certainty, taking any action pre-emptively will often make matters worse. 
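The point about imperfect forecasts can be made concrete with a quick base-rate calculation. The numbers below are invented for illustration (they are not actual ICEWS figures): even a seemingly accurate forecasting system will generate mostly false alarms when the events it predicts are rare.

```python
# Base-rate sketch: why "accurate" forecasts of rare events mislead.
# All numbers are illustrative, not actual ICEWS performance figures.

base_rate = 0.001           # 1 in 1,000 country-months sees a revolt
sensitivity = 0.95          # model flags 95% of real revolts
false_positive_rate = 0.05  # model wrongly flags 5% of quiet country-months

# Probability that a flagged country-month is a real revolt (Bayes' rule)
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_revolt_given_flag = sensitivity * base_rate / p_flag

print(f"P(revolt | alarm) = {p_revolt_given_flag:.3f}")
# With these numbers, fewer than 2% of alarms correspond to real revolts.
```

Under these assumptions, a decision-maker acting on every alarm would be wrong more than 98 percent of the time, which is precisely why pre-emptive action on imperfect forecasts so often makes matters worse.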
In short, programs like ICEWS are yet another case of spending huge amounts on efforts whose overt benefits are questionable—even if the covert benefits, for the government contractors and military, are huge.  Whether for good or ill, godlike efforts at predicting the future are sinkholes of squander.  Admittedly, they are small-scale in the deeply cratered landscape of wasteful defense department spending.  But wouldn’t it be refreshing if a few Congressional gadflies critiqued such programs not for their failures to predict the future–but for their very conception?
The underlying mindset is even riper for critique.  The self-styled deities of our foreign policy establishment do not rest content with predicting the future.  Their real intent is to play God—to control the future.  Consider just one irritating example, Aaron David Miller, on NPR yesterday morning.  (I do not know Miller and use him only as an example from among many possible figures who have made similar statements in recent weeks.)
In his view, the U.S. is “in the worst of all possible worlds with grand expectations and supporting very important values, but without the capacity and leverage to implement a preferred American outcome or even an outcome in Egypt that we can control.”  According to NPR, Miller believes this is part of a long-term trend in which U.S. credibility is reaching all-time lows. “We are neither admired, respected or feared to the degree that we need to be in order to protect our interests, and the reality is — and this is just another demonstration of it — everybody in this region says no to America without cost or consequences [Afghanistan’s] Hamid Karzai says no, [Iraq’s] Maliki on occasion says no, [Iran’s] Khamenei says no, [Israel’s] Netanyahu says no. Mubarak says no repeatedly.”
How shocking!  The leaders of independent states, even our own client states, say No to us!  Our vast “hard power” doesn’t put the “fear” of God into our enemies—or our friends.  The “very important values” we supposedly support don’t generate respect.  (Remind me by the way, what those values are, given likely extension of the Patriot Act, continuing detentions at Guantanamo Bay, rampant drone strikes, etc., etc.)  If only we had the right DARPA model!  Maybe then the people and the leaders of other countries would do our bidding.   
In fact, the idea that the Gods of Government can control the politics of other lands, when they can’t even control our own, would be laughable if it weren’t so costly in dollars and lives.  And that conceit, unfortunately extends well beyond the Beltway to the broader foreign policy “elite” in our country, as I’ve written about and critiqued before.
Don’t get me wrong.  As a social scientist, I think it makes sense to do research to understand and explain the world.  Public policy should be based on the best available information, and in some cases, it may be wise to make large public expenditures on the basis of predictions.  But controlling and even predicting human societies except in the broadest of generalities is a fool’s errand.
So, today, notwithstanding my happiness at Mubarak’s resignation and my admiration of the protesters, I can only hope–but certainly not predict–that Egypt will in fact develop a more democratic government in the future.  I can say that the Egyptian army, like militaries around the world including our own, is not exactly known for its democratic values.  I can say that “people power” may be able to keep the trend toward democratization going.  But in the end, the next stages of the Egyptian revolution are as unpredictable as any major social phenomenon–notwithstanding the fond dreams of the wannabe DARPA deities and their avaricious acolytes.



Political Models vs. Current Events

Noah Shachtman at Wired:

In the last three years, America’s military and intelligence agencies have spent more than $125 million on computer models that are supposed to forecast political unrest. It’s the latest episode in Washington’s four-decade dalliance with future-spotting programs. But if any of these algorithms saw the upheaval in Egypt coming, the spooks and the generals are keeping the predictions very quiet.

Instead, the head of the CIA is getting hauled in front of Congress, making calls about Egypt’s future based on what he read in the press, and getting proven wrong hours later. Meanwhile, an array of Pentagon-backed social scientists, software engineers and computer modelers are working to assemble forecasting tools that are able to reliably pick up on geopolitical trends worldwide. It remains a distant goal.


The benefits and the costs:

“All of our models are bad, some are less bad than others,” says Mark Abdollahian, a political scientist and executive at Sentia Group, which has built dozens of predictive models for government agencies.

“We do better than human estimates, but not by much,” Abdollahian adds. “But think of this like Las Vegas. In blackjack, if you can do four percent better than the average, you’re making real money.”

Over the past three years, the Office of the Secretary of Defense has handed out $90 million to more than 50 research labs to assemble some basic tools, theories and processes that might one day produce a more reliable prediction system. None are expected to result in the digital equivalent of crystal balls any time soon.

In the near term, Pentagon insiders say, the most promising forecasting effort comes out of Lockheed Martin’s Advanced Technology Laboratories in Cherry Hill, New Jersey. And even the results from this Darpa-funded Integrated Crisis Early Warning System (ICEWS) have been imperfect, at best. ICEWS modelers were able to forecast four of 16 rebellions, political upheavals and incidents of ethnic violence to the quarter in which they occurred. Nine of the 16 events were predicted within the year, according to a 2010 journal article [.pdf] from Sean O’Brien, ICEWS’ program manager at Darpa.

Darpa spent $38 million on the program, and is now working with Lockheed and the United States Pacific Command to make the model a more permanent component of the military’s planning process. There are no plans, at the moment, to use ICEWS for forecasting in the Middle East.

All of this, I must say, is pretty predictable.


Causation, Correlation, Aggression, and Political Rhetoric

John Sides at the Monkey Cage weighs in with some social science on the relationship between militant metaphors in political speech and individuals’ willingness to engage in actual political violence against government officials. The findings he cites: an experimental study suggests that exposure to “fighting words” in political ads has no effect on the overall population, but does have an effect on people with aggressive tendencies. Moreover:

This conditional relationship — between seeing violent ads and a predisposition to aggression — appears stronger among those under the age of 40 (vs. those older), men (vs. women), and Democrats (vs. Republicans).

But his real point is that we should be cautious of inferring from this or any wider probabilistic data causation regarding a specific event:

To prove that vitriol causes any particular act of violence, we cannot speak about “atmosphere.” We need to be able to demonstrate that vitriolic messages were actually heard and believed by the perpetrators of violence. That is a far harder thing to do. But absent such evidence, we are merely waving our hands at causation and preferring instead to treat the mere existence of vitriol and the mere existence of violence as implying some relationship between the two.


Social Science: Apparently Too ‘Sciencey’ for the Iranian Government…

so says a senior Education Ministry official:

“Expansion of 12 disciplines in the social sciences like law, women’s studies, human rights, management, sociology, philosophy….psychology and political sciences will be reviewed,” Abolfazl Hassani was quoted as saying in the Arman newspaper.

“These sciences’ contents are based on Western culture. The review will be the intention of making them compatible with Islamic teachings.”

The Ayatollah added:

“Many disciplines in the humanities are based on principles founded on materialism disbelieving the divine Islamic teachings,” Khamenei said in a speech reported by state media.

“Thus such teachings…will lead to the dissemination of doubt in the foundations of religious teachings.”

I have no doubt that this will only serve to elevate Iranian social science to new heights.  I mean, it isn’t like doubt is fundamental to the scientific enterprise or anything…

[via The Monkey Cage]


What If Political Scientists Wrote the News?

From Christopher Beam at Slate:

A powerful thunderstorm forced President Obama to cancel his Memorial Day speech near Chicago on Monday—an arbitrary event that had no effect on the trajectory of American politics.

Obama now faces some of the most difficult challenges of his young presidency: the ongoing oil spill, the Gaza flotilla disaster, and revelations about possibly inappropriate conversations between the White House and candidates for federal office. But while these narratives may affect fleeting public perceptions, Americans will ultimately judge Obama on the crude economic fundamentals of jobs numbers and GDP.

Chief among the criticisms of Obama was his response to the spill. Pundits argued that he needed to show more emotion. Their analysis, however, should be viewed in light of the economic pressures on the journalism industry combined with a 24-hour news environment and a lack of new information about the spill itself…

Read the rest here. Commentary from Andrew Gelman at The Monkey Cage.


“Statistics is the New Grammar”

[Cross-posted at Signal/Noise]

In the latest issue of WIRED, Clive Thompson pens a great piece which echoes a sentiment I’ve touched on before: in a data-driven world it is critical that all citizens have at least a basic literacy in statistics (really, research methodology broadly, but I’ll take what I can get).

Now and in the future, we will have unprecedented access to voluminous amounts of data. The analysis of this data and the conclusions drawn from it will have a major impact on public policy, business, and personal decisions. The net effect of this could go either way–it can usher in a period of unprecedented efficiency, novelty, and positive decision making or it can precipitate deleterious actions. Data does not speak for itself. How we analyze and interpret that data matters a great deal, which puts a premium on statistical literacy for everyone–not just PhDs and policy wonks.

Thompson notes a number of statistical fallacies that many, including members of the media, fall prey to. Using a single event to prove or disprove a general property or trend is a spectacular one that we see all the time, particularly with large, macro-level events. Regardless of what side of the climate change debate you are on, a single snow storm or record-breaking heat wave does not rise to the level of hypothesis-nullifying or -verifying evidence.

There are oodles of other examples of how our inability to grasp statistics–and the mother of it all, probability–makes us believe stupid things. Gamblers think their number is more likely to come up this time because it didn’t come up last time. Political polls are touted by the media even when their samples are laughably skewed.
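The gambler’s fallacy, in particular, is easy to check by simulation. A minimal sketch with a fair coin (the game and all numbers here are made up purely for illustration):

```python
# Gambler's-fallacy check: after a run of losses, is a win more likely?
# Monte Carlo sketch with a fair coin; purely illustrative.
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = "win"

# Frequency of a win immediately following three straight losses
wins_after_streak = total_streaks = 0
for i in range(3, len(flips)):
    if not flips[i - 3] and not flips[i - 2] and not flips[i - 1]:
        total_streaks += 1
        wins_after_streak += flips[i]

print(wins_after_streak / total_streaks)  # hovers around 0.5, streak or no streak
```

The ratio stays at roughly one half no matter how long the preceding losing streak, which is exactly what independence means and exactly what the gambler refuses to believe.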

Take correlation and causation. The cartoon below nicely illustrates the common fallacy that the correlation of two events is enough to prove that one causes the other:
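A numerical sketch makes the same point: two series that merely share an upward trend will correlate almost perfectly without either causing the other. The data below are invented for illustration:

```python
# Spurious correlation: two unrelated made-up series that both trend
# upward correlate strongly. Toy data, purely illustrative.

years = range(2000, 2020)
ice_cream_sales = [100 + 5 * t + (t % 3) for t, _ in enumerate(years)]
drownings = [20 + 2 * t - (t % 4) for t, _ in enumerate(years)]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson(ice_cream_sales, drownings))  # > 0.98: correlated, not causal
```

Both series are driven by the shared time trend (a lurking third variable), not by each other, yet the correlation is nearly perfect.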

In thinking about this I remembered an argument I had with a number of colleagues while in grad school over why they had to be at least somewhat literate in quantitative analysis and game theory since they never intended to use such methods. Given that we will only see an increase of data and data-based (no pun intended) arguments, policies, and decisions we need to, at a minimum, be able to understand how the results were achieved and whether or not the studies are flawed. Patrick is probably the last person to apply quantitative methods to social scientific problems, but he can certainly speak the language with the best of them.


Bottom line: the importance of statistical literacy will only increase. Statistics will come to permeate our lives, more so than ever before. We had better be able to speak the language.

What Are the Hardest Problems in the Social Sciences?

A bunch of “big thinkers” sat down at Harvard last week to debate this question. You can watch their videos here.

Some of the “problems” put forth were simply timely empirical issues with important normative content, like how to close the gender gap in pay equity, or reduce the “skills gap” between Blacks and Whites, or how to understand the relationship between ethnic diversity and civil war.

But some were more wide-ranging:

What is the biggest falsehood perpetuated in the social sciences today?

How and why does the “social” become “biological”?

If we know that individuals are susceptible to all kinds of biases and don’t always make rational decisions, how do we decide what’s “good”?

Being only a “medium-size thinker,” I didn’t speak at this event (or even know about it until afterward). What answer would I have given if I had? For me, the hardest problem in the social sciences is how to identify and measure the significance of non-events, without turning them into events.

The event concludes by asking a hard-to-answer question of its own: How hard are these problems for social scientists, really? (And how important are they?) These are apparently tricky enough questions that Harvard has designed a survey to crowdsource an answer: click here to read all the “problems” described by the various speakers and code them on a 1-5 scale of hardness and importance.

What do you think are the hardest problems in the social sciences? Leave a comment below, or post your answer on the group’s Facebook page.


Does Social Science Training Make Us Selfish and Immoral?

Tim Harford (whose blog at FT.com you really must read) discusses the results of a recent survey that suggest the answer is yes:

A recent survey by Yoram Bauman and Elaina Rose, two economists from the University of Washington, explains that in experiments, economics students are less generous, more likely to choose an unco-operative approach and more likely to accept bribes.

Bauman and Rose’s survey built upon an earlier study from 30 years ago, which demonstrated that “postgraduate students of economics were more likely than others to ‘free ride’ in a laboratory game, effectively exploiting other players for their own benefit.”

I tend to agree with Harford’s supposition that what is really going on here is that economists–as well as political scientists and sociologists–are simply choosing optimal strategies based on the game theoretic models upon which the laboratory experiments are based. Cooperation is not inherently a good strategy, but rather one whose value is determined by the structure of a game or experiment (e.g. what is the payoff structure of particular combinations of choices, is the game a one-shot deal or is it iterated, etc). Social scientists are trained in, and therefore comfortable with, game theory and the various structures and payoffs that exist. It is reasonable then to assume that if placed in an experiment that mimics those structures and payoffs they are more likely to play the dominant strategies.

For example, if I recognize that the experimental situation is a one-shot Prisoner’s Dilemma, then I am going to defect rather than cooperate. Why? Because the outcome depends on my choice as well as my fellow subject’s, and the structure of the game dictates that defection is the dominant strategy for both parties–why assume the other subject would choose differently, particularly given the risk I run of a huge loss if they don’t cooperate? Now, if the game is iterated and neither of us knows when it will end, the shadow of the future makes cooperation a far more attractive strategy. As Harford noted:

[P]erhaps the budding economists are not truly mean and selfish, but are simply showing that they have mastered their studies by producing the behaviour described in simple textbook models. Arguably, the students of economics are not doing anything sinister, any more than if they calculated the roots of a quadratic equation.
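The one-shot logic described above can be sketched with an explicit payoff matrix. This is the standard textbook Prisoner’s Dilemma with made-up payoffs, not the actual values from any of the experiments discussed:

```python
# One-shot Prisoner's Dilemma: defection is the dominant strategy.
# Payoffs follow the standard textbook ordering, chosen for illustration.

# (my_payoff, their_payoff) indexed by (my_move, their_move)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    # Pick whichever of my moves pays more against a fixed opponent move
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defect beats cooperate no matter what the other player does:
for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))  # "defect" both times
```

A student who has internalized this matrix is not being mean by defecting in the lab; they are simply playing the best response the structure allows.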

There is also the possibility that those that choose to enter postgraduate training in the social sciences are simply more jaded, cynical, or “realist” in their worldview. And while they may hold personal views that cooperation and selfless behavior are desirable and moral endpoints, their research and training illustrates to them that in many cases it can be unproductive (or, in some cases, counterproductive) to cooperate oneself without taking into account what others will do.

[Cross-posted at Signal/Noise]


Applying Social Science Concepts to Business: E-Book Edition

[Cross-posted at bill | petti]

Sunday’s Wall Street Journal reported that Amazon has stopped selling Kindle versions of all Macmillan titles. John Sargent, Macmillan’s CEO, recently went to Amazon’s headquarters to try to negotiate new terms for the sale of e-books published by his company. In general, the publishing industry has been unhappy with Amazon’s insistence that most books be priced at $9.99. Apparently, the discussions resulted in Amazon pulling all Macmillan e-books from its website.

I am a firm believer that the historical knock on the social sciences is unwarranted and that many of the theories, frameworks, and concepts found in the various disciplines are widely applicable in the real world, business in particular. So when I read about the Amazon-Macmillan dispute I was struck at how a number of social science concepts shed quite a bit of light on these developments; namely Albert Hirschman’s concepts of exit, voice, and loyalty as well as signaling and the indirect use of force.

So what do these concepts have to do with e-books? Glad you asked.

In his classic Exit, Voice, and Loyalty, political economist Albert Hirschman provided an elegant framework for analyzing the options available to individuals when they become displeased with the actions of an organization. According to Hirschman, individuals have three options: they can remain loyal to the organization, they can exercise voice (e.g. protest, negotiation), or they can exit the organization (e.g. join a new group, shop at a new store, etc). The framework can easily be applied both to explain and to predict the behavior of consumers in a market or citizens in a political system.

Since the launch of Amazon’s Kindle, book publishers have tried to exercise their voice vis-a-vis Amazon and its pricing requests, but to little avail. Until now, voice and loyalty seemed the only realistic options. Sure, there are other e-book retailers out there, but the success of Amazon’s Kindle and the attractive prices it set for customers provided the retailer with a huge advantage in terms of a distribution channel. However, with the launch of Apple’s iPad, book publishers now have a more realistic exit option. Not only is Apple a potentially powerful sales channel, but it has agreed to pricing terms that are more favorable to publishers than Amazon’s (Apple will take 30% of whatever price publishers choose to charge, leaving the price point up to individual publishers).

When individuals have the option of exit, we should see typical market dynamics at work–i.e. customers can shop around to various suppliers to find the products they want at the price they want, with competition among those suppliers driving the quality of products higher and the price for goods lower. This is why we generally abhor monopolies, since by nature they stifle market dynamics and leave customers with only the options of loyalty or voice, meaning they lack much leverage. With the launch of a new and potentially powerful sales channel, publishers now have a more realistic exit option that can be brought to the table in negotiations with Amazon.

However, rather than alter the current pricing terms with Macmillan as a result of this new exit option, Amazon stopped distributing Macmillan’s e-books altogether. The question, of course, is why? I would posit that Amazon was trying to send a signal to dissuade other publishers from also trying to renegotiate terms. Now I have no information as to what Sargent may have proposed and if any ultimatums were given, so what follows is purely an intellectual exercise.

We can view Amazon’s move as a deterrent threat to other publishers who, emboldened by Apple’s entry into the market, may attempt a similar renegotiation. By harshly punishing one actor (i.e. refusing Macmillan access to a valuable and dominant sales channel) that attempted to change the status quo (Amazon’s preferred pricing structure), Amazon hopes to send a signal to other potential actors to not attempt something similar. This is a great example of signaling and the indirect use of force, two related concepts that economists (such as Michael Spence and Thomas Schelling) and political scientists (such as Robert Jervis and James Fearon) have fleshed out over the past 40+ years. Rather than having to expend resources forcing every potential adversary to either change their behavior or maintain the status quo, an actor can choose to send a signal to all potential adversaries by making an example of one of them. Not only can an actor make a threat to punish their adversaries, but they can also demonstrate that they have both the capability and the will to do so by carrying out such a punishment on one adversary.

This dynamic is accentuated in systems where one actor faces challenges from many potential actors rather than just one. Barbara Walter has looked at why some states deal with separatist groups and factions violently rather than through negotiations. The key variable: the number of potential separatist groups that may also seek self-determination. As the number of potential adversaries increases, the probability of solving these disputes through negotiation decreases. When faced with many potential challengers, governments will choose to demonstrate their willingness and ability to put down rebellions in order to deter other separatist groups from similar challenges. In other words, having a reputation for resolve when dealing with adversaries becomes more important when you face many potential threats rather than just one.
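Walter’s logic lends itself to a quick back-of-the-envelope sketch. All of the costs below are invented for illustration; the point is only the comparative statics, i.e. that fighting becomes the cheaper option as the number of potential challengers grows:

```python
# Reputation logic a la Walter, with made-up costs for illustration:
# fighting the first challenger is expensive but deters the rest;
# conceding is cheap once but invites every later challenger.

FIGHT_COST = 10    # cost of putting down one rebellion
CONCEDE_COST = 3   # cost of one negotiated concession

def cheaper_strategy(n_potential_challengers):
    fight = FIGHT_COST                                 # fight once, deter the rest
    concede = CONCEDE_COST * n_potential_challengers   # concede to everyone
    return "fight" if fight < concede else "concede"

for n in (1, 2, 5, 10):
    print(n, cheaper_strategy(n))
# With these numbers the government concedes when facing 1-3 challengers
# but fights once it faces 4 or more.
```

The same arithmetic applies to Amazon: punishing one publisher is costly, but if it deters the rest of the industry from demanding new terms, the one-time cost can be a bargain.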

In the case of Amazon, it could be that, seeing the potential for many publishers to attempt to renegotiate the current pricing structure, the company decided to send a signal to the rest of the publishing world that attempts to change the status quo would not only fail, but would result in severe punishment (i.e. the loss of a popular sales and marketing channel). My guess is that this likely won’t work, for two reasons: 1) as mentioned earlier, the publishers actually have someplace else to go–they can exit the current relationship and cast their lot with Apple; and 2) Amazon is heavily reliant on the book publishers. Without their titles the allure of a Kindle decreases. The threat may not be credible, or at least not sustainable for long.

Thoughts?


© 2019 Duck of Minerva
