
Friday Nerd Blogging





This President’s Day week, tickets went on sale for The Hunger Games movie. While you re-read the books and prepare yourself to be generally underwhelmed by the film (seriously? casting kindly-wise-old-Father-of-Elizabeth-Bennett as a bloodthirsty dictator?), let me encourage you to purchase The Hunger Games Companion.

The Companion is jam-packed with history, science, survival facts, literary analysis, mythology, and trivia about the real-life settings, past and present, where the key phenomena in the novels play out sociologically. Each chapter is like a mini-syllabus for a separate first-year honors seminar in some grisly topic. Each is a window onto a dark underside of human history and politics.

One chapter you won’t see in The Companion would be titled “International Relations.” That of course is because there are no international relations in Panem, since apparently there are no other humans anywhere in sight. My son is starting in on the books hungrily now, and he noticed this right away: “Why aren’t they worried about invasions by other Capitols?” It’s an interesting question, but there you have it: the story is one of domestic affairs only, with no two-level games.

I wonder what readers of this blog think this means for the sociological value of the story, and I also wonder how the story would have been different, if at all, had President Snow needed to contend with international diplomacy while keeping his subjects under the heel.


Learning to Fish Through Human Rights Data

I often encourage my students to distill complex analytical concepts into terse, plain English.

But some things can’t be boiled down to a tweet, as I discovered this week when attempting to explain Cingranelli-Richards data coding in response to Joshua Foust’s queries on my abusers’ peace post.

What I didn’t think to tell him in response to his original question was: here is how you can look it up for yourself.

So this post contains (I hope) a better answer to Josh’s question, as well as a brief primer on the CIRI dataset: what it contains and how to use it.

I should add that I’ve never used it for research myself, that I don’t work with large datasets, and that I’m not claiming the coding is perfect. But if you ever need to look up the answer to a question like “what are the states that are similar to Singapore in both regime type and human rights record?”, CIRI provides a user-friendly resource for a little fact-checking.

Here’s what the data-set contains: CIRI consists of quantitative scores for government respect for 15 internationally recognized human rights for 195 countries, annually from 1981-2010. The rights coded include: physical integrity rights (like no torture, disappearance or summary execution), empowerment rights (free speech, free assembly, freedom of religion, and the right to vote) and indicators for women’s and workers’ rights (here are the descriptions). Scores on each for each country-year were derived by coders drawing on Amnesty and State Department country reports for that year (codebook here).

CIRI is primarily designed to be used by students of human rights for large-N regression analysis, but here’s how journalists, bloggers, or anyone else can use CIRI to answer basic questions about countries’ human rights records at a glance:

1) Create a CIRI account.

2) Go to Download Data. Click Create New Dataset.

3) Select just the variables and years you want.

4) Compare them in an Excel sheet.

5) Sort the Excel sheet columns according to the question you’re asking.
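For readers more comfortable with a script than a spreadsheet, steps 4 and 5 can be sketched in a few lines of pandas. The column names below (PHYSINT, INJUD, ELECSD, ASSN, SPEECH) follow the CIRI codebook, but the country labels and scores are invented purely for illustration:

```python
import pandas as pd

# Toy stand-in for a CIRI "Create New Dataset" download. The column names
# follow the CIRI codebook, but the country labels and scores below are
# invented for illustration only -- they are NOT real CIRI values.
df = pd.DataFrame({
    "country": ["A", "B", "C", "D"],
    "PHYSINT": [8, 7, 4, 8],   # physical integrity index: 0 (worst) to 8 (best)
    "INJUD":   [2, 0, 1, 2],   # the four components below run 0 (worst) to 2 (best)
    "ELECSD":  [2, 1, 0, 2],
    "ASSN":    [1, 0, 1, 2],
    "SPEECH":  [2, 0, 1, 2],
})

# Build the ad hoc "freedom" index as the mean of the four empowerment
# columns, then sort the best human rights performers to the top.
freedom_cols = ["INJUD", "ELECSD", "ASSN", "SPEECH"]
df["freedom"] = df[freedom_cols].mean(axis=1)
df = df.sort_values("PHYSINT", ascending=False)
print(df[["country", "PHYSINT", "freedom"]])
```

This does exactly what the Excel sort does: the top rows are the best human rights performers, and the new "freedom" column lets you eyeball how free each of them is.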

Here’s how I used it to answer Joshua’s question on countries like Singapore (and a better explanation of my answer): I created a personal spreadsheet for just the year 2010. As a measure of “human rights performance” I looked at just the physical integrity index already created by CIRI (which combines scores on torture, extrajudicial killing, political imprisonment and disappearances). The index ranges from 0 (no government respect for these four rights) to 8 (full government respect for these four rights).

For “freedom” I created my own index by also downloading the columns for freedom of association, freedom of speech, electoral rights, and independence of the judiciary. These are coded from 0-2, with 0 being the worst score.* So my CIRI dataset included the CIRI variables labeled PHYSINT, INJUD, ELECSD, ASSN, and SPEECH. I used Excel to create a column averaging the last four of these for each country, and then compared each country’s average score on “freedom” to its CIRI score on “physical integrity rights.”

How did I do this at a glance, without statistical analysis, fast enough to respond to a tweet? Easy. I just sorted the PHYSINT column largest number first, so the best human rights performers are at the top and the worst are at the bottom. Among the countries that receive an “8” it’s very easy to tell which are the freest and least free – they vary in rank order on the other column between .25 and 2 (they can go as low as 0 but don’t for these high human rights performers). If you scroll down into the 7s (which is where Singapore sits) you can see the same distribution.

Now, Joshua’s question was which countries were similar to Singapore – relatively unfree, but with a relatively good human rights record – and I listed the countries that score as well or better on human rights (7 or 8) but as bad or worse on freedom (.75, .5, .25, or 0). But his real question is to what extent they are outliers among these high performers. (The comparison should be to the other high-performing countries, not to all 192 countries in the dataset.) And to some extent Josh’s hunch is correct: 67 countries receive the 7 and 8 rankings for human rights, and only 6 score at .75 or worse on freedom (Singapore, Djibouti, Qatar, Bahrain, Seychelles, and Oman). However, that doesn’t mean that all the other high performers are at the upper ranking for freedom – lots are in the middle ground. Only 29 out of these 67 have the highest democracy score. So on the one hand, if you have the highest human rights score you have a 43% chance of being a full-fledged democracy and only a 9% chance of being an autocracy. But you also have a 48% chance of falling somewhere in the middle on the democracy scale.
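This eyeballed comparison amounts to a simple filter-and-count, which can be sketched in pandas. The scores below are made up rather than the real 2010 data, so the counts will not match the actual figures in the text; the point is only to show the logic:

```python
import pandas as pd

# Invented scores for a handful of countries (NOT real CIRI values);
# "freedom" here is the 0-2 average of the four empowerment columns.
scores = pd.DataFrame({
    "country": ["W", "X", "Y", "Z", "Q"],
    "PHYSINT": [8, 8, 7, 7, 7],
    "freedom": [2.0, 1.25, 0.5, 2.0, 1.0],
})

high = scores[scores["PHYSINT"] >= 7]        # high human rights performers (7s and 8s)
autocratic = high[high["freedom"] <= 0.75]   # Singapore-like outliers: good record, unfree
full_dem = high[high["freedom"] == 2.0]      # top score on every freedom component
middle = len(high) - len(autocratic) - len(full_dem)

print(f"{len(full_dem)/len(high):.0%} full democracies, "
      f"{len(autocratic)/len(high):.0%} autocracies, "
      f"{middle/len(high):.0%} in between")
# → 40% full democracies, 20% autocracies, 40% in between
```

Swapping in a real CIRI download for the toy DataFrame reproduces the kind of back-of-the-envelope percentages discussed above.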

Within the middle and lower grades on human rights performance, especially the middle performers receiving a grade of 4, 5 or 6, there is wide variation in the relative freedom score. So while even eyeballing this data you can see a relationship between rights and democracy, the correlation is certainly imperfect. States like Singapore and Qatar that score super high on one indicator and super low on the other are indeed outliers, as are states like South Africa with mid-high freedom scores but low human rights performance. What these cases show, though, is that we have to qualify our conflation of human rights and democracy and think more about how this relationship works.

But the key point here is: this data is at the fingertips of anyone who wants to look at it independently or play with it for their own purposes.

*This is NOT the measure of democracy used in the studies I wrote about, both of which use a different dataset, Polity IV, to measure democracy. I used the CIRI measures as a short-cut because a user can easily compare them to one another.


War Crimes and the Arab Spring. Again.

The direct targeting of actors protected under the laws of war has been one of the most disturbing trends arising out of the Arab Spring. For example, the targeting of medical workers and ambulance drivers was well documented and reported on last year. Additionally, here at the Duck we’ve been following the issue. In recent months Dan Nexon wrote about the targeting of doctors who treated protesters in Bahrain, and I’ve blogged about the growing concern of the ICRC, who have seen themselves and their workers targeted. Unfortunately, this trend has continued into 2012. In January, the vice-president of the Syrian Red Crescent, Abdulrazak Jbeiro, was shot and killed in circumstances described as “unclear” – an act that was widely condemned by the ICRC and officials worldwide.

The deaths of Marie Colvin and Remi Ochlik are an example of another neutral actor in wartime that has frequently been targeted – the press. Accredited journalists are protected under the laws of war, specifically the 1949 Geneva Conventions and Additional Protocol I. If they are wounded or sick (GCI 13(4)) or shipwrecked (GCII 13(4)), they are given protections. If they are captured, accredited correspondents are to be given POW status (GCIII 4A(4)). Additional Protocol I devotes a section to the protection of journalists:

Art 79. Measures of protection for journalists
1. Journalists engaged in dangerous professional missions in areas of armed conflict shall be considered as civilians within the meaning of Article 50, paragraph 1.
2. They shall be protected as such under the Conventions and this Protocol, provided that they take no action adversely affecting their status as civilians, and without prejudice to the right of war correspondents accredited to the armed forces to the status provided for in Article 4 (A) (4) of the Third Convention.
3. They may obtain an identity card similar to the model in Annex II of this Protocol. This card, which shall be issued by the government of the State of which the Journalist is a national or in whose territory he resides or in which the news medium employing him is located, shall attest to his status as a journalist.

(A good and longer summary of the rules may be found here.)

It is true that these rules in the 1949 Geneva Conventions and API apply to international (and not internal) armed conflict. But since these individuals are non-combatants, directly targeting them would also be illegal under any legal framework. Further, it can be argued that directly targeting aid workers and journalists is a clear violation of customary international law in both international and non-international armed conflict.

This is, of course, on top of the relentless shelling, bombing and targeting of civilians by Syrian forces. While the deaths of these journalists once again highlight what is going on, we should not lose sight of the fact that, by even conservative estimates, thousands of civilians have died in the conflict since last year. The methods employed by the Syrian armed forces come nowhere near the standards by which we measure the conduct of hostilities.
Worse, it is clear that civilians are suffering great deprivations as a result of the uprising and crackdown. This has led the ICRC to specifically request access to the civilian population in order to deliver food, water, medicine and fuel.

Last year the ICRC launched a campaign on the obstacles that impede the delivery of assistance and aid in areas of hostilities and armed conflict. Certainly, a consequence of the Arab Spring has been to highlight how fragile many of these international norms are. I am not going to pretend that I have any amazing solutions to the crisis in Syria – everything seems like a pretty terrible option. But there can be no doubt that we should be standing up for the laws of war and demanding that Syria’s ‘allies’ (Russia and China) place pressure on Syria to respect international law. It is the very least we – and they – can do. The right to deliver humanitarian assistance and the protection of aid workers have long been established in international law. And significantly, this includes UN Security Council Resolution 1502, which (having been adopted unanimously) both Russia and China voted for in 2003.


Abusers’ Peace?

In reaction to Charli’s provocative February 20 post on the constructivist peace, I left a number of questions in comments. Since more people read the blog than read the comments, I thought it was worthwhile to put them on the front page as a separate post.

Thus, if you are interested, click through. I’ve added a few pertinent links as well to create added value for people who already read the original comments.

I’m not familiar with the specific studies noted in Charli’s post, but do wonder about the scope and interpretation of the data — especially in regard to the “abusers’ peace.”

1. International war is a fairly rare event in the past 65 years, but civil war is far more common. What kinds of states are most likely to experience civil war?

The COW database, by the way, treats the Soviet and CIA/Pakistan interventions in Afghanistan as a civil war. By contrast, Vietnam is coded as an international war.

2. How much of the abusers’ peace might be explained by decades of Soviet hegemony over eastern Europe? Would Soviet hegemony reflect shared identity?

Following Rummel, what kinds of regimes practice democide?

3. Given that human rights abusers arguably long outnumbered non-abusers in international politics, and that international war itself is rare, isn’t any quantitative study using those variables bound to find a somewhat misleading abusers’ peace?

4. Is this really just a back-handed way of pointing out that the US, UK, and France (or NATO) have primarily intervened in states with poor human rights records — including former colonies? Most of the other recent mixed-dyad international wars seemingly involve Israel and its neighbors or India-Pakistan.

5. Just eye-balling the list of international wars, I see a number of abuser-abuser conflict dyads in recent decades: Iran-Iraq, Ethiopia-Somalia, China-Vietnam, Cambodia-Vietnam, and Uganda-Tanzania.

When these states were not fighting, did that reflect shared identity?

Does any hypothesized shared identity among abusers apply to the regime or the people?

6. How long before this thread leads us to the “clash of civilizations”?


All Male Soldiers are Rapists and all Female Soldiers are Weak Homewreckers: Fox News on Female Soldiers

I mostly try to let Fox News polemics slide past me like water off a duck’s back. It was easy to dismiss Liz Trotta’s first rant about the proposed changes to the US military, which will allow more women into front-line positions (and recognize those women who are already in these posts), but the second iteration, in which she clarifies her position (and clearly reads a diatribe from a prompter), demands another interruption to my blogging hiatus. We should start with a briefing of Liz-isms, including: “hardline feminist,” “feminist biology,” and “feminist creed.” Let’s see if these become clear after a quick view of her main arguments:
1. The women and combat issue has “never gotten a fair and open hearing” and has instead been established as a “fait accompli” by “hardline feminists.”
2. These same hardline feminists have helped to fabricate “silly and dishonest fairy tales about women’s heroism in war” to support their case for removing the exclusion.
3. Biology is destiny, and men are facing “feminist biology” and having to work with weaker women.
4. Testosterone rules in war, and in close combat “basic instincts” take over, which puts women at risk.
5. Signs of abuse within the military are all too often used to support the “never enough bureaucracy of women victims within the armed forces.”

Let’s just leave her rant about pregnant women and the desecration of the American family aside for now and work with these 5.

First, the women and combat issue has received almost as many open hearings as Fox has failed Republican hosts. Liz herself cites the 1991 Senate hearing on the issue and fails to note that the policy changes she is talking about came as a result of a commission initiated by Congress.

Second, Trotta cites the Jessica Lynch fabrication as evidence that women’s participation in combat more generally has been essentially ‘made up.’ The Lynch debacle is something to take note of precisely because the fairy tale it created was one of female victimhood and male heroism. Why turn to this example of military propaganda when there is other evidence of women’s participation in combat – for example, women make up 16% of the fatalities in the Iraq and Afghanistan missions, and several have won medals for their contributions to combat missions in Iraq?

Trotta is right on the third point in the sense that women do measure up differently from men on physical standards tests. But, as reported in a previous blog, the military chose to have sex-specific testing – not because it wanted women to have lower standards, but as part of a recognition of physical difference and the requirements needed to test job capacity rather than meet the male standard. And PS Liz, biology isn’t destiny, because according to experts like Maia Goodell, over 5% of women are kicking men’s butts on physical standards tests. The AVERAGE woman has less upper body strength and endurance than the average man, but the military often attracts and creates above-average female candidates.

The fourth and fifth points that Trotta makes are the most troubling. This ‘basic instinct’ argument is a throwback to prehistoric analysis of men as incapable of controlling their drive and their genitalia. The argument is insulting to men and ignores ample evidence that women and men can work in close proximity without men feeling obliged to rape. As for the sexual violence statistics – surely this is evidence of a major gender problem within the military rather than proof that women need to be kept out.
How did we get here, Liz (I feel like we’re on a first-name basis since you call feminists whatever you want)? What is your objective? Who are these crazy hardline feminists you speak of, and why are you so cynical and dismissive of a “feminist creed” focused on “the right to choose, rights over one’s body etc” as you put it? Why are you and other Republicans like Santorum making this about family values rather than seeing it as a sign of the changing reality of the US military (and others)? Australia, Canada and 12 other countries have NO restrictions on women in combat roles, and the family structure has not disappeared, men do not rape every female in their proximity, and feminists have not overrun the countries with their irrational cries for respect, rights, and recognition.


Vietnam PS Impressions (2): GDR in the Jungle


Here is part one, where I noted how much the communist super-idolization of leaders like Ho and Mao weirds me out. Here are a few more social science impressions from our university trip:

4. What is it about communist states and concrete? Ech. It is so ugly and awful-looking. And it looks even worse and more out of place in the tropics. Mozambique and Vietnam look like the GDR in the jungle. The German Democratic Republic was architecturally hideous enough with its soulless, modernist-boxy, steel-and-concrete gigantism. Now drop that model into a third world tropical setting, and the outcome is even more awful-looking, not to mention dysfunctional and individuality-crushing. Vietnam and Mozambique have both terribly scarred their landscapes with countless square concrete box buildings that rise straight up out of the (otherwise attractive) rolling green countryside. They don’t fit the locale at all, not to mention that they are often only half-built and/or decaying from all the saltwater in the tropical air. It looks atrocious. Good god. Couldn’t the Soviets do anything right? Did they have to export even lazy, style-less concrete boxes masquerading as ‘socialist realist’ architecture? Where’s Frank Lloyd Wright when you need him?

5. The Indo-Sinic collision in Vietnam makes the local art the most interesting I’ve seen yet in Asia. The national museum of fine art has (above) a wonderful serene Buddha, with his hands clasped and face placid (fairly typical) – plus 30 arms. Wow! That stopped me cold: Buddha + Vishnu = I have no idea. I can only imagine how the monks back in Korea (my wife is a Buddhist) would react. But it is truly unique, and I find Confucian art with all its rigid, formal wise men telling me to be a good son kinda boring. Bring on the wild Champa statuary with bodhisattvas who look like Hindu gurus and dancers with their legs backwardly touching their heads. Awesome.

6. ‘Please, let me make your trip to Vietnam as un-Vietnamese as possible.’ Ech. What is it about tour companies and cultural insulation? We ate most of our meals in Korean restaurants; we met the Korean ambassador, who told us how the Vietnamese have a ‘Korean dream’ and love Samsung; they served soju at every meal; we were shuffled around to souvenir shops explicitly built for Korean tourists, where you could buy stuff that you could get at any mall in Hanguk-land, the staff spoke Korean, and even the owners were apparently Korean; we didn’t even have to exchange any money! I guess flying Vietnamese Airways and eating some spring rolls was a major concession.

7. A few other random thoughts:

a. I never saw a Buddhist-Taoist-Confucian ‘fusion’ shrine anywhere before; again the art in Vietnam was surprisingly unique and engaging. People were half-bowing, full-bowing, waving incense. It was pretty hard to know exactly what to do (three half-bows usually works pretty well).

b. Remember your French textbook in high school telling you that Vietnam was in the ‘Francophonie’? Wrong. About the only French I could find was stuff left over on purpose. There were no French signs or services to speak of. No one spoke it. I looked a lot. English was the dominant foreign language, but there were almost no American tourists – the visitors were about half Europeans and half Asians (Koreans and Chinese, no Japanese).

c. Yes, you can visit the Hanoi Hilton. Yes, it is extremely disturbing; you can even see the well-maintained flight suit John McCain was wearing when he was shot down and captured (another bizarre and uncomfortable tourist attraction). But post-Abu Ghraib, indignation feels hypocritical. It’s a very hard place to visit. The focus of the museum is on the French repression (complete with a preserved guillotine – very grim). Generally speaking, the Hanoi museums aren’t nearly as anti-American as you might expect. The ire is focused more on the French than us, and the bulk of the attention goes to Ho as a legendary founder like Lycurgus or George Washington.

I tweeted a series of these sorts of political science impressions from Vietnam here.

Cross-posted on Asian Security Blog.


More on Gotovina

Ante Gotovina

Last week I wrote about targeting and mentioned the Gotovina Case. This case has become interesting to those who follow international law and post-conflict justice because the court’s decision (among other things) effectively states that a 4% error rate in targeting during a complex military operation was tantamount to a war crime.

As I said in the post, the decision prompted several laws of war scholars (many of whom were former JAGs) to hold a roundtable at Emory University on the decision and subsequently write up an amicus brief, supported by 12 international law experts from the US, Canada and the UK, which was submitted to the appeals chamber at the ICTY. This prompted a response from the prosecution, which may be read here.

What I didn’t realize, however, was that the Court decided that same day to reject the amicus. You can read their decision here.

I must admit that going through the Court’s decision does not inspire confidence. That the decision begins with a discussion about the word length is… like something I might write at the END of my comments on a student essay.

Next, in the brief “Discussion” of the merits of the arguments, the court briefly states that it is “not convinced that the applicants’ submissions would assist in determining the issues on appeal”, and invokes procedural rules for submitting evidence. It further states that the amicus brief is problematic because it does not disclose that one of the authors, Geoff Corn, was an expert witness for the defence. Given that this latter point should have been pretty obvious, and that the Court is already lecturing the authors for going over the word limit, you wonder how this should have been done – or why it is a matter of substance in deciding the merits of the amicus.

Either way, the Court uses these points to reject the amicus in a brief dismissal that I find wanting. Disappointingly, the amicus has been dismissed on rather procedural and technical grounds. And this is important: if international courts are going to make controversial decisions suggesting that a 4% error rate is tantamount to a war crime, and if they reject advice on the matter because someone didn’t explicitly attach a CV to an amicus that violated the 10% +/- rule, I am concerned. And you have to wonder what kind of message this sends to countries thinking about signing up to war crimes courts and trials.

Regarding my post from last week, Geoff Corn responded in the comments to direct readers to his SSRN paper on the matter. I would definitely recommend interested Duck readers to take a look.

Clearly, Gotovina remains a case that should be closely watched. The man himself remains a controversial figure. Being concerned with his trial is not to say he is not guilty of some crimes. However, it is clear that many experts in this area are concerned about the logic employed by the ICTY on several important aspects of the case and its implications for future war crimes trials.

I look forward to more reaction from the amicus authors and other scholars on this matter.


Tackling the challenges of Big Data

This is what I’m talking about:

City University London’s Centre for Information Leadership is hosting this free symposium to bring together academics and practitioners from across industry to tackle the challenges posed by “big data” – the growing amount of information that needs to be stored, searched, analysed and visualised in the digital age.
The event aims to begin a dialogue across traditionally-siloed sectors, reflecting the fact that a wide range of organisations and individuals are struggling to cope with the explosion of data brought about by new technologies – from smart meters in the energy sector to behavioural targeting in online advertising.

Without theoretically informed reflection, the “challenges of Big Data” will become synonymous with a question of how to store those petabytes of data, not what they mean.

The Constructivist Peace: Shared Norms and Pacific Relations Among Human Rights Abusers

Timothy Peterson and Leah Graham recently published a study in the Journal of Conflict Resolution showing that, after you control for the democratic peace, similarities in human rights performance have an important effect on any two countries’ likelihood of going to war. The interesting caveat is that this finding holds true for states that abuse their citizens as well as those that don’t:

Although mutual norms of domestic non-violence are more pacifying than mutual disregard thereof, the authors argue that a wide disparity in norms is more aggravating than shared norms… that norm asymmetry is aggravating provides evidence for an “‘abusers’ peace…” Our results suggest the possibility for conflicts arising between newly democratic, human rights-supporting states and their more oppressive, authoritarian neighbors… It may be that installing an “outpost of democracy” within an authoritarian region and enforcing improved respect for human rights on the domestic population will lead to increased regional violence.

Peterson and Graham are building on two earlier studies augmenting the democratic peace thesis by exploring the specific impact of human rights performance on war. IR scholars have long noted that democratic states almost never fight one another, but there is much debate over why. Although IR liberals have long treated “ideological commitment to human rights” as one of several “pillars” or indicators of the liberal peace, Mary Caprioli and Peter Trumbore showed that human rights performance alone is actually a good predictor of interstate violence even controlling for regime type. A separate study by David Sobek, M. Rodwan Abouharb and Christopher Ingram demonstrated that states with good human rights records were more peaceful with one another regardless of democracy. Peterson and Graham’s study extends this finding in one more direction, arguing that it is indeed dyadic norms that matter, but that there exists an “abusers’ peace” as well as a “human rights peace.”

The findings themselves should be critically analyzed and replicated further: among other problems they all rely on different and imperfect indicators of human rights performance (for a critique of quantitative data-sets on human rights see this article). However as a whole this line of research suggests two modest challenges to democratic peace theory.

First, it suggests that democratic institutions per se may be far less important in mitigating interstate war than a cluster of human rights norms that can include but are not limited to the “empowerment rights” associated with democracy. Second, it suggests that the causal mechanism translating adherence to these norms into pacific relations is not liberal but rather constructivist: a set of shared identities that can constitute shared interests among human rights abusers as well as champions, lessening the likelihood of violent conflicts. And at the level of policy, these studies do indeed encourage an emphasis on diffusing norms within neighborhoods rather than changing the regimes of specific states, if the goal is to achieve both rights and security.


In Defense of Particularism in American Foreign policy

[This is a cross-posting from Dart-Throwing Chimp.]

I’ve just finished reading John Lewis Gaddis’s terrific biography of George Frost Kennan, a towering figure in American foreign policy after World War II whom Henry Kissinger described as “one of the most important, complex, moving, challenging and exasperating American public servants.” Apart from recommending the book, which I do without hesitation to anyone with an interest in world affairs, I wanted to talk about how Gaddis’ distillation of Kennan’s ideas helped me clarify some of my own thinking on the conduct of foreign policy.

Nowadays, discussions of grand strategy in U.S. foreign policy are usually framed as a battle between realism, which emphasizes power and encourages statesmen to focus shrewdly on their national self-interest, and liberal institutionalism, which emphasizes cooperation and encourages statesmen to build institutions that facilitate it. Kennan–who was not trained as an academic and apparently didn’t care much for formal theories of international relations–saw the same terrain from a different perspective, and I think his map may be the more useful one.

For Kennan, the crucial divide lay between universalists and particularists. Gaddis spells out this theme most clearly in his discussion of Kennan’s thinking about how the United States ought to respond to the successes of Communist revolutionaries in China in 1947. Mao’s gains posed an early test of the recently pronounced Truman doctrine, which had seemed to pledge the United States to do all it could to prevent Communist advances anywhere in the world. While Kennan was dismayed by that doctrine’s absolutist language, it overlapped with the containment strategy he had begun to advocate as a response to the global ambitions and aggressive nature he saw in the Soviet Union.

Even so, and despite loud calls in the U.S. to do whatever was necessary to defend Chiang’s regime, Kennan convinced Truman to provide only a bare minimum of support to the Nationalists. According to Gaddis (p. 299), Kennan had thought that

Americans had clung too long to the idea of remaking China, an end far beyond their means. The [State Department’s] Policy Planning Staff [which Kennan headed] should determine what parts of East Asia are ‘absolutely vital to our security,’ and the United States should then ensure that these remain ‘in hands which we can control or rely on.’

Kennan framed this recommendation within the need to choose between universal and particularist approaches in foreign policy. Universalism sought to apply the same principles everywhere. It favored procedures embodied in the United Nations and in other international organizations. It smoothed over the national peculiarities and conflicting ideologies that confused and irritated so many Americans. Its appeal lay in its promise to ‘relieve us of the necessity of dealing with the world as it is.’ Particularism, in contrast, questioned ‘legalistic concepts.’ It assumed appetites for power that only ‘counter-force’ could control. It valued alliances, but only if based on communities of interest, not on the ‘abstract formalism’ of obligations that might preclude pursuing national defense and global stability. Universalism entangled interests in cumbersome parliamentarism. Particularism encouraged purposefulness, coordination, and economy of effort–qualities the nation would need ‘if we are to be sure of accomplishing our purposes.’

Kennan’s recommendation on China seemed to contradict his own grand strategy, but this contradiction reflected his deeper beliefs about the importance of particularism. He understood that a Communist victory in China would be a setback for the U.S., but he didn’t think it would be a disaster, and he believed that even massive American assistance was unlikely to stop the Communists from winning.

In this history, I hear echoes of contemporary debates over the “responsibility to protect” (R2P) doctrine and whether or not the U.S. should intervene militarily in Syria to stop the mass atrocities occurring there. As in the arguments over China policy in the 1940s, universalists often make the case for intervention in Syria on both moral and strategic grounds. Mass atrocities are morally abhorrent, of course, but acting to stop or prevent them is also an essential function of America’s role as the producer and defender of a liberal global order, a universalist might argue, just as stopping Communism in its tracks was during the Cold War. In a recent call for more forceful U.S. action against Syria, Anne-Marie Slaughter, a successor of Kennan’s as director of the State Department’s Policy Planning Staff, made just such a case. She wrote:

If you believe, as I do, that R2P is a foundation for increased peace and respect for human rights over the long term, that each time it is invoked successfully to authorize the prevention of genocide, crimes against humanity, grave and systematic war crimes, and ethnic cleansing as much as the protection of civilians from such atrocities once they are occurring, it becomes a stronger deterrent against the commission of those acts in the first place…If the U.S. says it stands behind R2P but then does nothing in a case where it applies, not only will dictators around the world draw their own conclusions, but belief in the U.S. commitment to other international norms and obligations also weakens, just at a time when the U.S. grand strategy is to expand and strengthen an effective international order. The credibility of the U.S. commitment to its own proclaimed values will also take yet another critical hit with every young person in the Middle East fighting for liberty, democracy, and justice.

After reading about his approach to China, it’s easy to imagine Kennan responding to this universalist argument by asking: “Yes, but how likely are we to succeed, and at what cost?”

To universalists, that kind of equivocation may seem immoral. Kennan, whom Gaddis portrays as a religious person and a philosopher, was not insensitive to these concerns. His rejection of universalism was not meant as a rejection of moral thinking. Instead, Kennan’s commitment to particularism was informed by his judgment that stark views about right and wrong were poor guides to foreign policy-making.

Could governments behave as individuals should? His preliminary conclusion, sketched out in his diary, was that politics, whether within or among nations, would always be a struggle for power. It could never in itself be a moral act…Foreign policy was not, therefore, a contest of good versus evil. To condemn negotiations as appeasement, Kennan told a Princeton University audience early in October [1953], was to end a Hollywood movie with the villain shot. To entrust diplomacy to lawyers was to relegate power, ‘like sex, to a realm in which we see it only occasionally, and then in highly sublimated and presentable form.’ Both approaches ignored the fact that most international conflicts were ‘jams that people have gotten themselves into.’ Trying to resolve them through rigid standards risked making things worse. (p. 492)

As a frequent critic of the U.S. government’s attempts to provoke and promote democratic revolutions elsewhere–here and here are some blogged examples–I was particularly interested in how Kennan’s commitment to particularism was evidenced in his frustration with policies aimed at supporting the “liberation” of Communist-ruled countries during the Cold War. In Kennan’s view,

“[A policy seeking ‘liberation’ in Communist-ruled countries] is not consistent with our international obligations. It is not consistent with a common membership with other countries in the United Nations. It is not consistent with the maintenance of formal diplomatic relations with another country. It is replete with possibilities for misunderstanding and bitterness. To the extent that it might be successful, it would involve us in heavy responsibilities. Finally the prospects for success would be very small indeed; since the problem of civil disobedience is not a great problem to the modern police dictatorship.” (p. 479)

Those concerns may sound cold, but Kennan was not indifferent to the liberationists’ cause. In fact, his views on the subject were also informed by a conviction that democracy would prevail in the end without active American support. According to Gaddis (p. 495), Kennan believed that

Democracy had the advantage over Communism in this respect, because it did not rely on violence to reshape society. Its outlook was ‘more closely attuned to the real nature of man…[so] we can afford to be patient and even occasionally to suffer reverses, placing our confidence in the longer and deeper workings of history.’

Like Churchill, who famously remarked that “democracy is the worst form of government except all the others that have been tried,” Kennan saw many faults in Western society in the 20th century, but he saw the available alternatives as even worse. Nevertheless, he firmly believed that any gains realized by pushing for liberation were not worth the entanglements, lost opportunities, and even wars that might result, especially when war could be nuclear.

Kennan saw himself as more of a “prophet” (his word) than a theorist or practitioner, and his views on “liberation” illustrate how he often thought about international relations on time scales that most people either don’t consider or consider a luxury. His containment policy was founded on the prescient expectation that the Soviet Union’s internal flaws would eventually lead to its own disintegration, but he did not expect to live long enough to see that happen.

When contemplating the plight of actual people suffering under actual dictatorships, the idea that democracy will eventually prevail can seem a little too convenient, like it’s just a way to absolve us of any responsibility for the injustices of the here and now. Is it really more convenient, though, than the belief that righteousness is always right? Where Kennan’s view is materially convenient, implying that we can achieve the desired result through inaction, the liberationist’s view is morally convenient, presuming that well-intentioned actions will always bring good results.

And there’s the matter of the historical record. Long-term trends clearly support Kennan’s expectation that democracy would keep expanding, albeit fitfully and with many reversals. More important, these advances have usually come either without direct U.S. support, or in places where U.S. involvement was incidental to the eventual outcome. The events that precipitated the collapse of the USSR and the end of Communist rule in Eastern Europe mostly caught the U.S. by surprise, and the U.S. response to them was generally modest and ambivalent.

Likewise with the Arab Spring. The wave of uprisings that swept the Arab world in 2011 started in Tunisia, where the U.S. had done virtually nothing to promote democracy. It soon spread to Egypt and Bahrain, where U.S. support for military “deep states” vastly outweighed its material and verbal commitments to opposition groups, and to Libya, where the U.S. had actually warmed to the dictator in recent years in response to his decision to give up weapons of mass destruction. In other words, these revolutions were hardly American-made; if anything, they occurred in spite of American indifference and support for the status quo. In this sense, the Arab Spring supports Kennan’s expectation that American intervention is hardly a prerequisite for democratic revolution, and that democracy will advance on its own through the “longer and deeper workings of history.”

If universal principles aren’t the way to go, how, then, should foreign policy be conducted? For most of his adult life, Kennan owned and worked a small farm in southern Pennsylvania, and he often did the yardwork at his home in Princeton, too. It’s not surprising, then, that he may have best expressed his commitment to particularism and penchant for thinking on long time scales in a horticultural metaphor that envisioned a patient, process-oriented approach as the best way to strike a balance between moral ambitions and animal interests. This metaphor was offered up during a series of four lectures Kennan delivered at Princeton in 1954–lectures that became the book Realities of American Foreign Policy–and I think Gaddis’ summation of those lectures (pp. 494-495) makes a proper coda to this post.

Americans could no longer afford economic advances that depleted natural resources and devastated natural beauty, Kennan insisted. Nor could they tolerate dependency, for critical raw materials, on unreliable foreign governments. Nor could they tear their democracy apart internally because threats to democracy existed externally. Nor could they entrust defenses against such dangers to the first use of nuclear weapons, for what would be left after a nuclear war had taken place? These were all single policies, pursued without regard to how each related to the others, or to the larger ends the state was supposed to serve. They neglected ‘the essential unity’ of national problems, thus demonstrating the ‘danger implicit in any attempt to compartmentalize our thinking about foreign policy.’

That lack of coordination ill-suited the separate ‘planes of international reality’ upon which the United States had to compete. The first was a ‘sane and rational one, in which we felt comfortable, in which we were surrounded by people to whom we were accustomed and on whose reactions we could at least depend.’ The second was ‘a nightmarish one, where we were like a hunted beast, oblivious of everything but survival; straining every nerve and muscle in the effort to remain alive.’ Within the first arena, traditional conceptions of morality applied; ‘We could still be guided…by the American dream.’ Within the second, ‘there was only the law of the jungle; and we had to do violence to our own traditional principles–or many of us felt we did–to fit ourselves for the relentless struggle.’ The great question, then, was whether the two could ever be brought into a coherent relationship with one another.

They could, Kennan suggested, through a kind of geopolitical horticulture. ‘We must be gardeners and not mechanics in our approach to world affairs.’ International life was an organic process, not a static system. Americans had inherited it, not designed it. Their preferred standards of behavior, therefore, could hardly govern it. But it should be possible ‘to take these forces for what they are and to induce them to work with us and for us by influencing the environmental stimuli to which they are subjected.’ That would have to be done ‘gently and patiently, with understanding and sympathy, not trying to force growth by mechanical means, not tearing the plants up by the roots when they fail to behave as we wish them to. The forces of nature will generally be on the side of him who understands them best and respects them most scrupulously.’  


Labels and tribes

In the Matrix, it’s trivial to specify the underlying
data-generating process. It involves kung fu.

Given PTJ’s post, I wanted to clarify two points from my original post on Big Data and the ensuing comment thread.

I use quantitative methods in my own work. I’ve invested a lot of time and a lot of money in learning statistics. I like statistics! I think that the development of statistical techniques for specifying and disciplining our analytic approach to uncertainty is the most important development in social science of the past 100 years. My objection in the comments thread, then, was not to the use of statistics for inference. I’m cautious about our ability to recover causal linkages from observational data, but no more so than, say, Edward Leamer–or, for that matter, Jeffrey Wooldridge, who wrote the first econometrics textbook I read.

My objection instead is to the simple term “inferential statistics,” because the use of that term to describe certain statistical models, as opposed to the application of statistical models to theoretically-driven inquiry, often betrays an unconscious acceptance of a set of claims that are logically untenable. The normal opposition is of “inferential” to “descriptive” statistics, but there is nothing inherently inferential about the logistic regression model. Indeed, in two of the most famous applications of handy models (Gauss’s use of least-squares regression to plot asteroid orbits and von Bortkiewicz’s fitting of a Poisson distribution to data about horses kicking Prussian officers), there is no inference whatsoever being done; instead, the models are simply descriptions of a given dataset. More formally, then, “inferential” does not name a property of statistical models; the term refers strictly to their use. What is doing the inferential work is the specification of parameters, which is why it is sometimes entirely appropriate to have a knock-down fight over whether a zero-inflated negative binomial or a regular Poisson is the best fit for a given test of a given theory.
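The von Bortkiewicz case is worth making concrete, since it shows what a purely descriptive model fit looks like. The sketch below uses the standard published counts (the 200 corps-year subset: deaths per corps-year of 0, 1, 2, 3, 4 occurring 109, 65, 22, 3, and 1 times). Fitting the Poisson here is just summarizing the data in front of us; no claim is made about any corps-year outside the dataset.

```python
import math

# von Bortkiewicz's horse-kick data: deaths per corps-year
# (the classic 200 corps-year subset of the Prussian army records).
observed = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}

n = sum(observed.values())                        # 200 corps-years
total_deaths = sum(k * c for k, c in observed.items())
lam = total_deaths / n                            # MLE of the Poisson rate: 0.61

# "Fitting" here is description, not inference: the Poisson pmf with this
# lambda is simply a compact summary of the dataset.
expected = {k: n * math.exp(-lam) * lam ** k / math.factorial(k)
            for k in observed}

for k in sorted(observed):
    print(k, observed[k], round(expected[k], 1))
```

The fitted counts track the observed ones closely (about 108.7 expected zero-death corps-years against 109 observed), which is exactly why the example became famous; the inferential work, if any, only begins when the fitted parameter is put to some theoretical use.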

So, my objection on this score is narrowly to the term “inferential statistics,” which I simply suggest should be replaced by something slightly more cumbersome but much more accurate: “the use of statistics for inference.” What this phrase loses in brevity it gains in accuracy.

The second point is that my post about Big Data was meant to serve as a warning to qualitative researchers about what could happen if they did not take seriously the promise of well-designed statistical methods for describing data. My metaphor of an invasive species was meant to suggest that we might end up with a much-impoverished monoculture of data mining that, by dint of its practitioners’ superior productivity, would displace traditional approaches entirely. But the proper response to this is not to equate the use of statistical methods with data mining (as I think a couple of commenters thought I was arguing). Quite the contrary: It would be much preferable for historians to learn how to use statistics as part of a balanced approach than for historians to be displaced entirely by data miners.

This is all the more relevant because the flood of Big Data that is going to hit traditionally qualitative studies will open new opportunities for well-informed and teched-up researchers who can take advantage of the skills that leverage the availability of petabytes of data. After all, the real enemy here for qual and quant researchers in social science is not each other but a new breed of data miner who believes that theory is unnecessary, a viewpoint best expressed in 2008 by Chris Anderson in Wired:

But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. … There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

I feel confident that no reader of the Duck wants to see this come to pass. The best way to head that off is not to adopt an unthinking anti-statistical stance but rather to use those methods where appropriate in order to support a deeper, richer understanding of social behavior.
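The trouble with “correlation is enough” has a cheap failure mode that a short simulation makes vivid. The following is a hypothetical sketch, not anyone’s actual analysis: generate fifty variables of pure noise, mine every pair for the “strongest” relationship, and an impressive-looking correlation always turns up.

```python
import random

random.seed(1)
n_vars, n_obs = 50, 30   # 50 meaningless "variables", 30 observations each
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

def corr(a, b):
    # Pearson correlation, computed by hand to keep the sketch self-contained.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Theory-free pattern hunting: scan all 1,225 pairs for the "best" finding.
best = max(abs(corr(data[i], data[j]))
           for i in range(n_vars) for j in range(i + 1, n_vars))
print(round(best, 2))
```

With 1,225 pairs and only 30 observations apiece, the winning |r| comfortably exceeds what any single pre-specified test would call chance, even though every variable is independent noise by construction. Without a theory about which comparisons are meaningful, the algorithm has no way to tell this artifact from a discovery.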

Experiments, Social Science, and Politics

[This post was written by PTJ]

One of the slightly disconcerting experiences from my week in Vienna teaching an intensive philosophy of science course for the European Consortium for Political Research involved coming out of the bubble of dialogues with Wittgenstein, Popper, Searle, Weber, etc. into the unfortunate everyday actuality of contemporary social-scientific practices of inquiry. In the philosophical literature, an appreciably and admirably broad diversity reigns, despite the best efforts of partisans to tie up all of the pieces of the philosophy of science into a single and univocal whole or to set perennial debates unambiguously to rest: while everyone agrees that science in some sense “works,” there is no consensus about how and why, or even whether it works well enough or could stand to be categorically improved. Contrast the reigning unexamined and usually unacknowledged consensus of large swaths of the contemporary social sciences that scientific inquiry is neopositivist inquiry, in which the endless drive to falsify hypothetical conjectures containing nomothetic generalizations is operationalized in the effort to disclose ever-finer degrees of cross-case covariation among ever-more-narrowly-defined variables, through the use of ever-more sophisticated statistical techniques. I will admit to feeling more than a little like Han Solo when the Millennium Falcon entered the Alderaan system: “we’ve come out of hyperspace into a meteor storm.”

Two examples leap to mind, characteristic of what I will somewhat ambitiously call the commonsensical notion of inquiry in the contemporary social sciences. One is the recent exchange in the comments section of PM’s post on Big Data (I feel like we ought to treat that as a proper noun, and after a week in a German-speaking country capitalizing proper nouns just feels right to me) about the notion of “statistical inference,” in which PM and I highlight the importance of theory and methodology to causal explanation, and Erik Voeten (unless I grossly misunderstand him) suggests that inference is a technical problem that can be resolved by statistical techniques alone. The second is the methodological afterword to the AAC&U report “Five High-Impact Practices” (the kind of thing that those of us who wear academic administrator hats in addition to our other hats tend to read when thinking about issues of curriculum design), which echoes some of the observations made in the main report on the methodological limitations of research on higher-education practices such as first-year seminars and undergraduate research opportunities — what is called for throughout is a greater effort to deal with the “selection bias” caused by the fact that students who select these programs as undergraduates might be those students already inclined to perform well on the outcome measures that are used to evaluate the programs (students interested in research choose undergraduate research opportunities, for example), and therefore it is difficult if not impossible to ascertain the independent impact of the programs themselves. (There are also some recommendations about defining program components more precisely so that impacts can be further and more precisely delineated, especially in situations where a college or university’s curriculum contains multiple “high-impact practices,” but those just strengthen the basic orientation of the criticisms.)

The common thread here is the neopositivist idea that “to explain” is synonymous with “to identify robust covariations between,” so that “X explains Y” means, in operational terms, “X covaries significantly with Y.” X’s separability from Y, and from any other independent variables, is presumed as part of this package, so efforts have to be taken to establish the independence of X. The gold standard for so doing is the experimental situation, in which we can precisely control for things such that two populations only vary from one another in their value of X; then a simple measurement of Y will show us whether X “explains” Y in this neopositivist sense. Nothing more is required: no complex assessments of measurement error, no likelihood estimates, nothing but observation and some pretty basic math. When we have multiple experiments to consider, conclusions get stronger, because we can see — literally, see — how robust our conclusions are, and here again a little very basic math suffices to give us a measure of confidence in our conclusions.
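The arithmetic really is that basic. As a sketch under a hypothetical data-generating process (everything here is invented for illustration): if X is randomly assigned, so the two populations differ only in their value of X, a bare difference of group means recovers X’s effect with no further statistical apparatus.

```python
import random
import statistics

random.seed(0)

# Hypothetical data-generating process: Y = 2*X + noise, where X is the
# experimentally controlled factor. Random assignment is what makes the two
# populations "only vary from one another in their value of X".
def outcome(x):
    return 2.0 * x + random.gauss(0, 1)

treated = [outcome(1) for _ in range(1000)]
control = [outcome(0) for _ in range(1000)]

# Observation plus some pretty basic math: no likelihood machinery needed.
effect = statistics.mean(treated) - statistics.mean(control)
print(round(effect, 2))
```

The estimate lands near the true effect of 2.0, and rerunning the trial with fresh seeds shows the spread of estimates directly, which is the “measure of confidence” that repeated experiments provide.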
But note that these conclusions are conclusions about repeated experiments. Running a bunch of trials under experimental conditions allows me to say something about the probability of observing similar relationships the next time I run the experiment, and it does so as long as we adopt something like Karl Popper’s resolution of Hume’s problem of induction: no amount of repeated observation can ever suffice to give us complete confidence in the general law (or: nomothetic relationship, since for Popper as for the original logical positivists in the Vienna Circle a general law is nothing but a robust set of empirical observations of covariation) we think we’ve observed in action, but repeated failures to falsify our conjecture is a sufficient basis for provisionally accepting the law. The problem here is that we’ve only gotten as far as the laboratory door, so we know what is likely to happen in the next trial, but what confidence do we have about what will happen outside of the lab? The neopositivist answer is to presume that the lab is a systematic window into the wider world, that statistical relationships revealed through experiment tell us something about one and the same world — a world the mind-independent character of which underpins all of our systematic observations — that is both inside and outside of the laboratory. But this is itself a hypothetical conjecture, for a consistent neopositivism, so it too has to be tested; the fact that lab results seem to work pretty well constitutes, for a neopositivist, sufficient failures to falsify that it’s okay to provisionally accept lab results as saying something about the wider world too.
Now, there’s another answer to the question of why lab results work, which has less to do with conjecture and more to do with the specific character of the experimental situation itself. In a laboratory one can artificially control the situation so that specific factors are isolated and their independent effects ascertained; this, after all, is what lab experiments are all about. (I am setting aside lab work involving detection, because that’s a whole different kettle of fish, philosophically speaking: detection is not, strictly speaking, an experiment, in the sense I am using the term here. But I digress.) As scientific realists at least back to Rom Harré have pointed out, this means that the only way to get those results out of the lab is to make two moves: first, to recognize that what lab experiments do is to disclose causal powers, defined as tendencies to produce effects under certain circumstances, and second, to “transfactually” presume that those causal powers will operate in the absence of the artificially-designed laboratory circumstances that produce more or less strict covariations between inputs and outputs. In other words, a claim that this magnetic object attracts this metallic object is not a claim about the covariation of “these objects being in close proximity to one another” and “these objects coming together”; the causal power of a magnet to attract metallic objects might or might not be realized under various circumstances (e.g. in the presence of a strong electric field, or the presence of another, stronger magnet). It is instead not a merely behavioral claim, but a claim about dispositional properties — causal powers, or what we often call in the social sciences “causal mechanisms” — that probably won’t manifest in the open system of the actual world in the form of statistically significant covariations of factors.
Indeed, realists argue, thinking about what laboratory experiments do in this way actually gives us greater confidence in the outcome of the next lab trial, too, since a causal mechanism is a better place to lodge an account of causality than a mere covariation, no matter how robust, could ever be.
Hence there are at least two ways of getting results out of the lab and into the wider world: the neopositivist testing of the proposition that lab experiments tell us something about the wider world, and the realist transfactual presumption that causal powers artificially isolated in the lab will continue to manifest in the wider world even though that manifestation will be greatly affected by the sheer complexity of life outside the lab. Both rely on a reasonably sharp laboratory/world distinction, and both suggest that valid knowledge depends, to some extent, on that separation. This impetus underpins the actual lab work in the social sciences, whether psychological or cognitive or, arguably, computer-simulated; it also informs the steady search of social scientists for the “natural experiment,” a situation close enough to a laboratory experiment that we can almost imagine that we ran it in a lab. (Whether there are such “natural experiments,” really, is a different matter.)
Okay, so what about, you know, most of the empirical work done in the social sciences, which doesn’t have a laboratory component but still claims to be making valid claims about the causal role of independent factors? Enter “inferential statistics,” or the idea that one can collect open-system, actual-world data and then massage it to appropriately approximate a laboratory set-up, and draw conclusions from that.

Much of the apparatus of modern “statistical methods” comes in only when we don’t have a lab handy, and is designed to allow us to keep roughly the same methodology as that of the laboratory experiment despite the fact that we don’t, in fact, run experiments in controlled environments that allow us to artificially separate out different causal factors and estimate their impacts. Instead, we use a whole lot of fairly sophisticated mathematics to, put bluntly, imagine that our data was the result of an experimental trial, and then figure out how confident we can be in the results it generated. All of the technical apparatus of confidence intervals, different sorts of estimates, etc. is precisely what we would not need if we had laboratories, and it is all designed to address this perceived lack in our scientific approach. Of course, the tools and techniques have become so naturalized, especially in Political Science, that we rarely if ever actually reflect on why we are engaged in this whole calculation endeavor; the answer goes back to the laboratory, and its absence from our everyday research practices.
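A toy confounding example shows what this “lab-ifying” amounts to. Everything below is hypothetical and invented for illustration: a background factor Z drives both the “treatment” X and the outcome Y, so the raw covariation of X and Y overstates X’s effect, and stratifying on Z is the statistical substitute for the lab’s physical control.

```python
import random
import statistics

random.seed(2)

# Hypothetical open-system data: no one assigned X. Units with Z = 1 are far
# more likely to take the treatment AND have higher outcomes regardless of it.
rows = []
for _ in range(20000):
    z = random.choice([0, 1])
    x = 1 if random.random() < (0.8 if z else 0.2) else 0
    y = 1.0 * x + 3.0 * z + random.gauss(0, 1)   # true effect of X is 1.0
    rows.append((z, x, y))

def mean_y(z, x):
    return statistics.mean(r[2] for r in rows if r[0] == z and r[1] == x)

# Raw covariation of X and Y: biased upward by the confounder Z.
naive = (statistics.mean(r[2] for r in rows if r[1] == 1)
         - statistics.mean(r[2] for r in rows if r[1] == 0))

# "Lab-ified" comparison: difference of means within strata of Z.
adjusted = statistics.mean(mean_y(z, 1) - mean_y(z, 0) for z in (0, 1))

print(round(naive, 1), round(adjusted, 1))
```

The adjusted estimate recovers the true effect only because the simulation stipulates that Z is the whole confounding story, and that stipulation is exactly the sort of assumption the statistical quasi-lab can never check from inside its own apparatus.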
But if we put the pieces together, we encounter a bit of a profound problem: we don’t have any way of knowing whether these approximated labs that we build via statistical techniques actually tell us anything about the world. This is because, unlike an actual lab, the statistical lab-like construction (or “quasi-lab”) that we have built for ourselves has no clear outside — and this is not simply a matter of trying to validate results using other data. Any actual data that one collects still has to be processed and evaluated in the same way as the original data, which — since that original process was, so to speak, “lab-ifying” the data — amounts, philosophically speaking, to running another experimental trial in the same laboratory. There’s no outside world to relate to, no non-lab place in which the magnet might have a chance to attract the piece of metal under open-system, actual-world conditions. Instead, in order to see whether the effects we found in our quasi-lab obtain elsewhere, we have to convert that elsewhere into another quasi-lab. Which, to my mind, raises the very real possibility that the entire edifice of inferential statistical results is a grand illusion, a mass of symbols and calculations signifying nothing. And we’d never know. It’s not like we have the equivalent of airplanes flying and computers working to point to — those might serve as evidence that somehow the quasi-lab was working properly and helping us validate what needs to be validated, and vice versa. What we have is, to be blunt, a lot of quasi-lab results masquerading as valid knowledge.
One solution here is to do actual lab experiments, the results of which could be applied to the non-lab world in a pretty straightforward way whether one were a neopositivist or a realist: in neither case would one be looking for covariations, but instead one would be looking to see how and to what degree lab results manifested outside of the lab. Another solution would be to confine our expectations to the next laboratory trial, which would mean that causal claims would have to be confined to very similar situations. (An example, since I am writing this in Charles de Gaulle airport, a place where my luggage has a statistically significant probability of remaining once I fly away: based on my experience and the experience of others, I have determined that CDG has some causal mechanisms and processes that very often produce a situation where luggage does not make it on to a connecting flight, and this is airline-invariant as far as I can tell. It is reasonable for me to expect that my luggage will not make it onto my flight home, because this instance of my flying through CDG is another trial of the same experiment, and because so far as I know and have heard nothing has changed at CDG that would make it any more likely that my bag will make the flight I am about to board. What underpins my expectation here is the continuity of the causal factors, processes, and mechanisms that make up CDG, and generally incline me to fly through Schiphol or Frankfurt instead whenever possible … sadly, not today. This kind of reasoning also works in delimited social systems like, say, Major League Baseball or some other sport with sufficiently large numbers of games per season.)
Not sure how well this would work in the social sciences, unless we were happy only being able to say things about delimited situations; this might suffice for opinion pollsters, who are already in the habit of treating polls as simulated elections, and perhaps one could do this for legislative processes so long as the basic constitutional rules both written and unwritten remained the same, but I am not sure what other research topics would fit comfortably under this approach.
[A third solution would be to say that all causal claims were in important ways ideal-typical, but explicating that would take us very far afield so I am going to bracket that discussion for the moment — except to say that such a methodological approach would, if anything, make us even more skeptical about the actual-world validity of any observed covariation, and thus exacerbate the problem I am identifying here.]
But we don’t have much work that proceeds in any of these ways. Instead, we get endless variations on something like the following: collect data; run statistical procedures on the data; find covariation; make the completely unjustified assumption that the covariation is more than something produced artificially in the quasi-lab; claim to know something about the world. So in the AAC&U report I referenced earlier, the report’s authors and those who wrote the Afterword are not simply content to collect examples of innovative curriculum and pedagogy in contemporary higher education; they want to know, e.g., whether first-year seminars and undergraduate research opportunities “work,” which means whether they significantly covary with desired outcomes. So to try to determine this, they gather data on actual programs … see the problem?

The whole procedure is misleading, almost as if it made sense to run a “field experiment” that would conduct trials on the actual subjects of the research to see what kinds of causal effects manifested themselves, and then somehow imagine that this told us something about the world outside of the experimental set-up. X significantly covarying with Y in a lab might tell me something, but X covarying with Y in the open system of the actual world doesn’t tell me anything — except, perhaps, that there might be something here to explain. Observed covariation is not an explanation, regardless of how complex the math is. So the philosophically correct answer to “we don’t know how successful these programs are” is not “gather more data and run more quasi-experiments to see what kind of causal effects we can artificially induce”; instead, the answer should be something like “conceptually isolate the causal factors and then look out into the actual world to see how they combine and concatenate to produce outcomes.” What we need here is theory and methodology, not more statistical wizardry.

Of course, for reasons having more to do with the sociology of higher education than with anything philosophically or methodologically defensible, academic administrators have to have statistically significant findings in order to get the permission and the funding to do things that any of us in this business who think about it for longer than a minute will agree are obviously good ideas, like first-year seminars and undergraduate research opportunities. (Think about it. Think … there, from your experience as an educator, and your experience in higher education, you agree. Duh. No statistics necessary.) So reports like the AAC&U report are great political tools for doing what needs to be done.

And who knows, they might even convince people who don’t think much about the methodology of the thing — and in my experience many permission-givers and veto-players in higher education don’t think much about the methodology of such studies. So I will keep using it, and other such studies, whenever I can, in the right context. Hmm. I wonder if that’s what goes on when members of our tribe generate a statistical finding from actual-world data and take it to the State Department or the Defense Department? Maybe all of this philosophy-of-science methodological criticism is beside the point, because most of what we do isn’t actually science of any sort, or even all that concerned with trying to be a science: it’s just politics. With math. And a significant degree of self-delusion about the epistemic foundations of the enterprise.


Even Uncle Ho’s Hand-Weights Contributed to the Revolution (1)

Our social science faculty association organized a trip to Vietnam last week. It was pretty fascinating. It was my first trip, and I don’t speak the language, so obviously I am qualified to generalize wildly about it now. As Gabriel Almond once quipped, ‘you should never generalize about a country until you’ve at least flown over it.’ So I guess I meet that test at least. Here are some anecdotal, political science-y impressions:

1. Communist hagiography really freaks me out. I have now been to the ‘holy-site’ tombs of Lenin, Mao, and Ho Chi Minh, and they are some of the most bizarre human artefacts I’ve ever seen. (Kim Il Sung has one too. For analogous thoughts on Communist kitsch in NK, try this.) If you’ve never seen a communist mausoleum, you should visit at least one, especially if you are a political scientist. Modeled on the Lenin tomb in Red Square, Ho’s is a large, raised rectangular box in hideously ugly Soviet-esque grey concrete. Ho lies in state inside – even though he explicitly wanted to be cremated (Lenin, too, wanted to be buried). And yes, they do refer to him as Uncle Ho to your face. Accompanying the mausoleum are two museums – and a gift shop in which you can buy Ho Chi Minh keychains and playing cards. Wait, what?!

I think attaching a gift-shop to a Marxist tomb (there was one after the Mao mausoleum tour too – I have a Mao Zedong tie-clip no less) captures the truly disturbing and contradictory bizarre-ness of these sites:

a. Communists aren’t supposed to believe in God, but these sites show they are basically catering to the religious impulse for legend and transcendence. In Russia, my host family told me that Stalin took God out of Heaven and placed him in Red Square. But doesn’t that violate the whole rationalist intent of Marx? Didn’t Marxists talk for years about how they were making socialism ‘scientific,’ with ‘iron laws’ and ‘stages’ of history and all that? Yet here is something like worship, another ‘opiate for the masses,’ complete with a cathedral with relics that tells the mythologized story to the masses, no? Doesn’t it fly completely in the face of Marxist ideology to build secular versions of religious stories and myths, complete with mimicry cults, ‘holy relics’ like Ho’s walking stick (pic above), and sacred sites like tombs?

b. On top of this ideological confusion is the transformation of these sites into tourist attractions for capitalist westerners. Gah! So not only do these things violate Marxist-Leninist basics of rationality by creating a new set of myths; they then become so widely disbelieved at home that the only reason they remain is that foreigners will pay money to see them. Again, when I was in Russia, there was talk of finally burying Lenin, per his wishes, except that the Moscow city government opposed it because of the tourist value. Isn’t that the ultimate capitalist debasement of these famous anticapitalists? Which leads to…

c. You don’t go to actually fawn over Lenin or Ho (I imagine the Vietnamese and Chinese hardly believe the ‘secular saint’ ideology anymore either). Instead you go to see the act of a cult of personality itself. Every detail becomes worthy of obsession, and the Ho cult seems even thicker than Lenin’s or Mao’s. Right behind the Ho mausoleum is Ho’s presidential palace-cum-museum, in which all sorts of personal stuff is retained – even his exercise hand-weights (also in the pic above) and used cars. (I read that in NK, they rope off benches where Kim Il Sung sat.) In short, the attraction of these sites for us is to see just how awful and perverse communism was in practice, not actually to learn anything about Mao, etc. We go to see this completely freaky communist-quasireligious myth-making – and then buy Ho Chi Minh paperweights as Christmas gifts.

2. I guess the first thing you notice as a political scientist is not ‘socialism,’ but the rapid-developer feel of the place. It’s evident as soon as you get off the plane, if only from the odor. Unless it rains, the air is always thick with ammonia and carbon; facemasks are everywhere. In fact, it was so bad that it activated my allergies and gave me headaches; it was worse than China, which is the worst I’ve experienced to date. Gridlock, a common curse among second-world developers, is extreme; Hanoi traffic is the most terrifying I’ve ever seen after Cairo. The density of Hanoi is extreme – not India, but close. The streets are filled with people selling everything imaginable. Like other Asian developers, there is a massive small-retail sector of mom-and-pop corner stores selling textiles, toys, pirated discs, tchotchkes, home appliances and other gizmos, etc. Scooters are everywhere. Everyone seems busy and is talking on their cell phones. The bustle is palpable. This was in great contrast to what I saw in southern Africa. It seemed to me that we were looking at Korea 40 years ago, an impression my colleagues confirmed.

3. The poverty did not look as bad as one would expect from the numbers. Average GDP per capita is only $1000 per annum, but I was pleasantly surprised to generally see straight teeth and bones, healthy-looking skin, reasonably middle-class attire (jeans, tennis shoes, socks, baseball caps, etc.), cell phones and headphones, scooters and bicycles, etc. Women wore make-up and heels. Even the cops were wearing iPods. No one seemed to be living on the street, wearing everything they own, as is immediately evident in India; nor did we see any massive shack-slums as in Mumbai. Even in the countryside, where infrastructure was noticeably worse, this basically held up. I imagine that deeper in the jungle and mountainous regions, like the central highlands, it is much worse. But Hanoi was more bustling, wealthy, and functional than places like Mumbai, much less Windhoek or Maputo. The difference between the countryside in Namibia and Vietnam was huge.

Part two will come in 3 days.

I tweeted a series of these sorts of political science impressions from Vietnam here.

Cross-posted on Asian Security Blog.


How to Be a Good Realist

I’m going to delve clumsily into IR Theory here, so I’d be grateful to get some feedback on the question of the ‘Realist’ minimum.

In a fascinating post recently on US-China relations, Stephen Walt wrote:

“First, as a good realist, I think that the basic state of Sino-American relations will be driven more by balances of power and configurations of interest than by the personalities of individual leaders. As I’ve noted before, if China continues to grow more powerful, Beijing and Washington will view each other with an increasingly wary eye and are likely to find more issues about which to conflict. A serious security competition — especially in East Asia — will be likely (which does not mean that war is inevitable or even likely, by the way). Again assuming China’s continued ascent, I’m guessing this will occur no matter who is in power in each country.”

Hang on. Are realists actually supposed to think that the personalities of leaders are marginal forces in world politics?

There are a number of difficulties here. Strictly ‘structural’ realists might believe that impersonal things like ‘balances of power’ are more often than not the engine that propels (or shapes and shoves) the world. But (from my recollection) even Kenneth Waltz didn’t straightforwardly take that view. And classical (or neoclassical) realists such as Colin Dueck, Gideon Rose, or Asle Toje surely are attentive to the things that can mediate between the world and the folk who wield power. Those things can be ideas, agents or contingencies.

After all, good realist commentators and theorists like Stephen Walt don’t write as though intervening variables matter little. Realists pay great attention to the figures (e.g. Bismarck, or maybe more recently Deng Xiaoping) who succeeded in navigating their nations through the anarchy of the world. In so far as their personalities mattered, they were figures who interpreted the world around them coherently and applied power effectively. Realists often also make concrete recommendations about policy. It’s not clear why they would prescribe policies if they were so deterministic as to assume that states would behave only in structurally undifferentiated ways regardless of who was in power or what ideas they had. Or is it?

That’s not to say that realists should always privilege ‘personality’ as the main agent. Political elites conceivably can share a ‘common sense’ concept of interests drawn from a wider political culture (such as the GOP/Democrat Consensus on Israel that Walt has argued for). But if intervening variables can count, why not the views/assumptions/quirks of a powerful individual from time to time? The research agenda in this area would presumably be charting and explaining when and how individual personalities interacted with everything else (both ideational and structural), as well as the causal mechanisms behind these interactions, whether in the parsimonious and systematic form of some scholars, or the richer but less systematic ways of others.

Ultimately it is always hard to prove or falsify this kind of stuff. It relies eventually on counterfactuals. Can we be sure that a ruler other than Stalin would have resisted intelligence reports of Hitler’s imminent attack in 1941? Would a President less incuriously dogmatic than George W. Bush have responded sooner to growing evidence of an insurgency in Iraq? Of course, such counterfactuals raise their own problems – of explaining how someone not like Stalin or Bush would have been in the harness at that time. It’s a slippery thing.

Regardless, we should resist the notion that Realists can only call themselves Realists if they privilege big impersonal forces as the dominant way of explaining behaviour. As Dan Nexon wrote a while back (and with less sympathy than my own affection for neoclassical forms of Realism), being a Realist emphatically does not require believing that states consistently act rationally in their self-interest:

“What’s odd here is why realists would react to putatively self-defeating state policies as if they comprised some kind of anomaly. Almost all of their “timeless lessons” about international politics involve states screwing things up: provoking counter-balancing coalitions, trying to make collective security work, getting involved in irrelevant peripheral conflicts, and so forth. Moreover, their underlying theoretical architectures are, as we’ve already seen, compatible with a broad range of state behavior.”

So what does it mean to be a Realist? Could a better version, staked out in its neoclassical form, be that states (and non-states, for that matter) may screw up for a range of reasons, while the anarchical system around them has its own dark and unforgiving rationality that penalises self-defeating behaviour?


Friday Nerd Blogging

Cello/sci-fi/heavy-metal nerds, all my favorites rolled into one.


Challenges to Qualitative Research in the Age Of Big Data

[Comic caption: Technically, “because I didn’t have observational data.” Working with experimental data requires only calculating means and reading a table. Also, this may be the most condescending comic strip about statistics ever produced.]

The excellent Silbey at the Edge of the American West is stunned by the torrents of data that future historians will be able to deal with. He predicts that the petabytes of data being captured by government organizations such as the Air Force will be a major boon for historians of the future —

(and I can’t be the only person who says “Of the future!” in a sort of breathless “better-living-through-chemistry” voice)

 — but also predicts that this torrent of data means that it will take vastly longer for historians to sort through the historical record.

He is wrong. It means precisely the opposite. It means that history is on the verge of becoming a quantified academic discipline. That is due to two reasons. The first is that statistics is, very literally, the art of discerning patterns within data. The second is that the history that academics practice in the coming age of Big Data will not be the same discipline that contemporary historians are creating.

The sensations Silbey is feeling have already been captured by an earlier historian, Henry Adams, who wrote of his visit to the Great Exposition of Paris:

He [Adams] cared little about his experiments and less about his statesmen, who seemed to him quite as ignorant as himself and, as a rule, no more honest; but he insisted on a relation of sequence. And if he could not reach it by one method, he would try as many methods as science knew. Satisfied that the sequence of men led to nothing and that the sequence of their society could lead no further, while the mere sequence of time was artificial, and the sequence of thought was chaos, he turned at last to the sequence of force; and thus it happened that, after ten years’ pursuit, he found himself lying in the Gallery of Machines at the Great Exposition of 1900, his historical neck broken by the sudden irruption of forces totally new.

Because it is strictly impossible for the human brain to cope with large amounts of data, this implies that in the age of big data we will have to turn to the tools we’ve devised to solve exactly that problem. And those tools are statistics.

It will not be human brains that directly run through each of the petabytes of data the US Air Force collects. It will be statistical software routines. And the historical record that the modal historian of the future confronts will be one that is mediated by statistical distributions, simply because such distributions will allow historians to confront the data that appears in vast torrents with tools that are appropriate to that problem.
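The mediation described above — statistical routines reducing an unreadably large record to distributions a human can confront — can be illustrated with a toy example. This is an assumed sketch, not the Air Force’s actual data or pipeline: the record counts, years, and “mission duration” variable are all invented for illustration.

```python
# Toy illustration of statistical mediation of a large historical record:
# no historian reads 100,000 records, but software can reduce them to
# per-year distributional summaries that are humanly interpretable.
import random
import statistics

random.seed(0)

# Stand-in for a huge archive: 100,000 records of (year, mission duration
# in hours), spread over a hypothetical decade.
records = [(1940 + random.randint(0, 9), random.expovariate(1 / 3.0))
           for _ in range(100_000)]

# Reduce the torrent to summaries a historian can actually confront.
by_year = {}
for year, duration in records:
    by_year.setdefault(year, []).append(duration)

summary = {year: (len(ds), statistics.mean(ds), statistics.median(ds))
           for year, ds in sorted(by_year.items())}

for year, (count, mean, med) in summary.items():
    print(f"{year}: n={count}, mean={mean:.2f}h, median={med:.2f}h")
```

The historian of the future confronts the ten printed lines, not the hundred thousand records — which is precisely the sense in which the record becomes mediated by statistical distributions.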

[Figure caption: Onset of menarche plotted against years, for Norway. In all seriousness, this is the sort of data that should be analyzed by historians but which many are content to abandon to the economists by default. Yet learning how to analyze demographic data is not all that hard, and the returns are immense. And no amount of reading documents, without quantifying them, could produce this sort of information.]

This will, in one sense, be a real gift to scholarship. Although I’m not an expert in Hitler historiography, for instance, I would place a very real bet with the universe that the statistical analysis in King et al. (2008), “Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler,” tells us something very real and important about why Hitler came to power that simply cannot be deduced from the documentary record alone. The same could be said for an example closer to (my) home, Chay and Munshi (2011), “Slavery’s Legacy: Black Mobilization in the Antebellum South,” which identifies previously unexplored channels for how variations in slavery affected the post-war ability of blacks to mobilize politically.

In a certain sense, then, what I’m describing is a return of one facet of the Annales school on steroids. You want an exploration of the daily rhythms of life? Then you want quantification. Plain and simple.

By this point, most readers of the Duck have probably reached the limits of their tolerance for such statistical imperialism. And since I am a member in good standing of the qualitative and multi-method research section of APSA (which I know is probably not much better for many Duck readers!), who has, moreover, just returned from spending weeks looking in archives, let me say that I do not think that the elimination of narrativist approaches is desirable or possible. First, without qualitative knowledge, quantitative approaches are hopelessly naive. Second, there are some problems that can only practically be investigated with qualitative data.

But if narrativist approaches will not be eliminated they may nevertheless lose large swathes of their habitat as the invasive species of Big Data historians emerges. Social history should be fundamentally transformed; so too should mass-level political history, or what’s left of it, since the availability of public opinion data, convincing theories of voter choice, and cheap analysis means that investigating the courses of campaigns using documents alone is pretty much professional malpractice.

The dilemma for historians is no different from the challenge that qualitative researchers in other fields have faced for some time. The first symptom, I predict, will be the retronym-ing of “qualitative” historians, in much the same way that the emergence of mobile phones created the retronym “landline.” The next symptom will be that academic conferences will in fact be dominated by the pedantic jerks who only want to talk about the benefits of different approaches to handling heteroscedasticity. But the wrong reaction to these and other pains would be a kneejerk refusal to consider the benefits of quantitative methods.


The Cultural is Political

Recently, Mike Innes tweeted playfully that he feared the Duck had become a “creative writing” blog due to the proliferation of satirical posts about pop cultural topics. This tweet was in response to my refutation of Brian Rathbun’s (also satirical) assertion that nerd / metal-head subcultures in US society are mutually exclusive, a post which also included critiques of the genre-specificity of pop cultural research in IR, commentary on a recent documentary about heavy metal music (itself quite political) and a satirized commentary about the strictures of institutional rules and norms on the identities and research agendas of political science professors.

Admittedly, the post did not deal with any foreign policy issues per se.

Mike’s implication (confirmed in a second tweet) appeared to be that blogging about pop culture, or blogging as pop culture (that is, as satire rather than as serious analysis), is not “real” political blogging. However much he may have been teasing, this got me thinking about what we mean as political scientists when we think or write or teach about popular culture as opposed to policy processes, and especially when we produce ‘creative’ products ourselves as political scientists versus what we consider ‘scholarly’ political science outputs. (Because apparently a blog post is ‘scholarly’ if it reflects a certain style of writing or addresses certain themes, but ‘creative’ if it deals with other themes or with similar themes using satire rather than social science jargon.)

In this post, and later this Spring at the International Studies Association Conference, I will argue that culture is politics; and that analyses that blend attention to culture with concern over conventional political and policy issues are particularly appropriate on blogs precisely because they are relatively neglected in the discipline (though, this is changing). However, in thinking through this claim, and in watching the comments thread on Megan’s fantastic gender-violence-fetishism post, I realize that one can mean very different things by “culture is politics.” Taking cultural products seriously, examining the politics by which culture is produced, and creating cultural products ourselves are three different roles political scientists can play as bloggers.

For example, taking culture seriously as a carrier of political values and norms is supremely important to what we do, and has been at least since the “cultural turn” in IR in the late 1980s. Of course, by “culture” IR scholars used to mean things like nationalist narratives, religion, or gender norms. Feminist IR scholars have long shown how the stories societies tell themselves and the representations they create – not only about war and peace but also about more mundane things like sex and soup – shape not only society but also foreign policy. And it wasn’t long before the lens was turned toward pop culture as well, by the work of Jutta Weldes and others: literature carries these narratives, cartoons and comics do, but so too do TV, film, and music.

(The intersectionalities of creative writing, political action and policy processes have a politics and a history of their own, often forgotten. Take Dr. Seuss, for example: we remember him for his children’s books, which have helped carry American values both throughout our culture and globally, but he got his start drawing political cartoons: World War II in some respects created him as a writer and artist.)

Today, classes on “Film and Politics” are proliferating in political science departments, and with good reason: political scientists are rightly interested not only in how cultural products like films and cartoons represent politics, but also in the causal and constitutive impact of those representations on actual political processes. While there has been less research (that I’m aware of) by political scientists into musical genres and politics, this only suggests a new niche that aspiring political scientists need to fill – like other cultural niches whose political implications have been insufficiently explored, such as fashion (but see Cynthia Enloe’s work on militarism) or food (but see Ansell and Vogel’s work on beef) or sports (but see Tomlinson and Young on national identity and international sports events).

A second strand of “pop-cultural” analysis here at the Duck (and in the discipline, as a few of the cites above suggest) concerns the politics of cultural industries – rather than analyzing representations in cultural products (like Harry Potter or Game of Thrones) we sometimes analyze, for example, representations in culture-industrial sites (like the Grammys or the Browncoats movement) or we sometimes look at the intersection of the two: how culture-industrial actors sometimes function intentionally as political actors through celebrity diplomacy of different types. In the field of IR, there is more and more literature across methodological divides that deals with these topics – from John Street’s conceptual treatment of ‘celebrity politicians’ to Huliaris and Tzifakis’s case studies on celebrity activism to James Fowler’s elaborate empirical analysis of the Colbert Bump.

But finally there is the manner in which, especially on Fridays, bloggers at the Duck post more light-hearted or creative cultural products of our own, loosely related to the topics we study – our version of casual Fridays, which manifest at other blogs as pictures of cats, children or squid. Here at the Duck you won’t find squid, but you may find polar bears. Sometimes this is truly “casual” blogging and sometimes we put significant creative effort into playfully blending cultural critique, political analysis and satire. It’s true that it’s sometimes hard to tell where political science leaves off and tomfoolery begins. But then, isn’t the very definition of ‘tomfoolery’ socio-politically constructed? Yes.

Though I don’t often give it much thought, if asked to think about it I guess I’d tend to be a fairly loose constructionist myself on which of these roles most befits political science bloggers any day of the week, but I suppose there is room for disagreement there. What do readers think?


China & Snyder’s “Myths of Empire” (2): Does China fit the Model?

Here is part one, where I argued that China reasonably fits the prerequisites for Jack Snyder’s theory from Myths of Empire. Here is an application of it to China to see how it works:
2. Contra Snyder, China’s modernization is being led by the state and the party, not so much by the military. That’s true. But clearly the PLA does have something of that ‘state within a state’ feel of Germany’s and Japan’s militaries. The PLA’s budget has exploded over the last two decades, and like other second-world militaries (Egypt, Pakistan), the PLA has lots of business interests (including, I’ve heard, casinos) that suffer little state oversight.

3. There is pretty clear oligopolization of the economy. And if anything, it is getting worse recently with the return of SOEs in China (still 80% of the economy). Back in the 90s, everyone seemed to expect these would die out, but they are hanging on and have made a comeback recently. And clearly the government has its hands on the commanding heights (both old, like heavy manufacture, and new, like the banking industry). However, Snyder’s liberal counter-coalition – cosmopolitan exporters interested in good ties with foreigners and workers interested in welfare over warfare – does push back clearly in China. For all the corruption and rent-seeking coming out of China’s growth, the export lobby has clearly played a restraining role against the PLA and the ideologues of the CCP on foreign policy.

4. Also contra Snyder, China is not led by an imperial coalition as openly as Germany or Japan was. As usual in such states, the military plays a big role, and lots of people think the PLA is on the rise, given China’s unexpected belligerence in the last few years. I agree. But the CCP still plays a huge role; I still think it is the coalition broker, even if it is slipping post-Deng and relative to the PLA. I don’t think anyone would see China today as ‘cartelized’ as the USSR in the late 70s/early 80s, or as Germany in the decade before WWI. China’s exporters especially have a big voice, as increasingly does the banking industry, if only because of all those dollars (reserves and T-bills) whose value will evaporate if there’s a real war. Following Snyder, though, China is neither a democracy nor a unitary executive. Its politics is dominated by big, often rent-seeking interest groups, all of them in bed with the government, and with growing influence for the military. The question is whether Snyder’s military-heavy-industry coalition will clearly emerge from this tangle and push foreign policy in a more aggressive direction.

5. There has been pretty serious ideologically nationalist grooming of the population. (On the patriotic education campaign, try this and this.) The old Maoist CCP didn’t stress Confucius or Chinese history that much (that was feudalism), nor the newly ‘found’ 100 years of humiliation (there, but not central to the ideology). Also, the incessant Japanophobia, with the special attention to the Nanking massacre, is new (however justified – the Japanese experimented with chemical weapons on the Chinese). If you go to Tiananmen Square now, it’s all nationalism, not Mao. There’s a huge (the biggest I’ve ever seen) TV monitor on-site running a continuous loop of heroic imagery of China, including what has to be the biggest flag waving heroically in the sunshine over windswept mountains that the world has ever seen. If you didn’t know China was the most awesomest place ever that got unfairly stomped on by Japan and the West, a visit to Tiananmen will cure you of your American foolishness.

The question is how much of this is directed internally, to prevent ‘splittism’ (my favorite Chinese neologism), and how much is actually indoctrinating the Chinese population into Snyder’s outward-oriented imperialism. But the former may bleed into the latter; clearly, the CCP is playing with fire in teaching young Chinese in this way. In 2005, when Chinese students attacked the Japanese embassy, a lot of people thought the government let it go on for a few days because the students were expressing their new patriotism. There was similar thinking regarding the Chinese flaps with Japan in the last few years over the coast guard ramming and Diaoyu/Senkaku. This is Snyder’s ‘blowback’ – an ideology rolled out to defend the current ruling coalition (post-Tiananmen Square crackdown, post-communist nationalism as justification of continued CCP rule after the Cold War) becomes a real force in politics in the next generation, because they actually believe this stuff.

6. China is being semi-encircled, as Snyder would suggest. Its behavior in the last few years scared everybody. Certainly the Chinese students I have taught are convinced that China is being encircled and that the US is probably behind it. Like Germany and Japan, China has few allies; it impresses, but doesn’t persuade. It’s also probably true that a break-out war would badly isolate China, generate an enormous counter-coalition (likely even including Russia and India), and result in a major defeat that would entail independence for its periphery, most obviously Tibet.

In sum, it seems to me the fit with Snyder is mixed at best:

a. Chronologically and ideologically, China should be a late, late developer, but it is better classified as a late developer. I wonder how Snyder would fit that in.

b. As a late developer, it’s not really generating a powerful imperial coalition at the top as Snyder says it should. There is some cartelization, yes. SOEs are rent-seekers closely connected to the state, they dominate heavy industry, and they are protectionist-mercantilist. And the PLA is growing in importance and independence (mostly because the party is declining as a broker). But the party, and the exporters especially, add diversity to that top coalition, and I am not sure Snyder can explain that in the context of late-development politics.

c. There is a virulent nationalist ideology being consciously and instrumentally distributed by elites for legitimizing reasons. The patriotic education campaign almost perfectly fits Snyder in both content (nationalist hysteria and foreign enemies) and purpose (buttressing the dominant coalition against domestic liberal opponents and an expansion of the franchise). Super prediction there. But the question is whether that nationalism is directed inward to buttress the CCP’s leadership role, or outward as the collection of strategic myths about expansion that Snyder (borrowing from van Evera) lists: offensive détente, enemies as paper tigers, the cumulative gains of conquest, bandwagoning, etc. I am not a huge expert on the campaign, but it seems more worried about Chinese internal issues (development, splittism, do what the party tells you) than about offensive imperial myths and the value of expansion.

d. So my sense is that Snyder’s model predicts a nasty imperialist oligarchy atop China’s cartelized politics, telling its citizens that China must expand against the growing list of enemies keeping it from its place in the sun. This should eventuate in a break-out war against a coalition including the US, Japan, SK, Australia, and maybe India and Russia. But I don’t really see a full-throated imperial coalition at the top of China now (I guess Snyder could argue that it is coming and that the signs suggest its emergence), nor am I sure that the myths the CCP is distributing are offensive-imperialist so much as internal-nationalist (although Snyder might argue that the latter eventually leads to the former).

Cross-posted at Asian Security Blog.


Targeting…targeting: What are reasonable expectations?

Blue moon, you targeted me standing alone…

Yesterday Charli wrote a post on whether or not those opposed to the use of drones should use the concept of “atrocity law” instead of “war crimes” or human rights violations.

I wonder if others who generally oppose “targeted killings” think the concept of “atrocity law” might be a more useful way of framing this problem publicly than talking about “war crimes” or “human rights” specifically – concepts that by their nature draw the listener’s attention to a legal regime that only partially bears on the activity in question and invites contrasting legal views drawn from contrasting legal regimes.

Charli asks this question given that:

I think there is significant and mounting evidence of normative opposition to the targeted killings campaign (regardless of arguments some may make about its technical legality under different legal traditions), and according to even the most conservative estimates it meets the other criteria of a significant number victims and large-scale damage. No one can doubt it’s highly orchestrated character.

I’m going to go with “no” on these questions. First, unlike Charli, I’m not certain there is “mounting evidence of normative opposition to the targeted killings campaign” in anything other than the protests of a relatively insular group of legal-academics-activists (Phil Alston et al.) who tend to be critical of these kinds of things anyway. In previous posts I have raised doubts about whether or not we can determine if targeted killing is effective, and about how some activities have challenged and changed the legal framework for the War on Terror. However, if anything, I think there is growing consensus within the Obama administration that the program works and is effective – and I think it is popular.

Additionally, I do not see how invoking the term “atrocity” will get us beyond many of the political problems involved in invoking other terms like “human rights law” or “war crimes”. If anything, “atrocity” seems to be an even less precise, more political term.

However, I think this conversation points to a third, larger issue that Charli is mostly concerned with – civilian death in armed conflict. Or, to put it another way: what expectations may we reasonably seek to place on our states when they carry out military actions? Those who write, research and teach on international law typically anchor their discussions in the legal principles of proportionality, necessity and distinction. However, these are notoriously vague terms. And, as such, when it comes to drones, many argue that these legal principles are being undermined.

In thinking about this question, I’ve been reminded of the recent controversy over the decision of the International Criminal Tribunal for the former Yugoslavia in the Gotovina case. In it, the Court ruled that a 4% error rate in targeting in a complex military operation was tantamount to a war crime. Four percent.

Was this a reasonable conclusion for the ICTY to make? Are militaries (and the military in question here was not a Western military with high-tech equipment) really expected to do better than a 96% accuracy rate when it comes to targeting? And if so, on what grounds can we (or the Court) say this is the case? And, bringing this back to Charli’s post, would we benefit from thinking about a 4% error rate in terms of “atrocity”?

There are two very good summaries at Lawfare and IntLawGrrls for more background on the case. Some concerned former military professionals (many of whom are now professors) – admittedly, another insular group of legal-academics-activists, of a very different stripe – have put together an amicus brief for the Gotovina appeal which is well worth reading.

However, immediate questions of legality aside, I think this raises a larger question as to what we can reasonably expect from military campaigns, especially in terms of accuracy. Are all civilian deaths “atrocity”? Historically, the laws of war have said no – that proportionality may sometimes render them permissible (if no less regrettable). And I believe that all but the most ardent activists would agree with this historically rooted position. But it is clear that our perceptions of reasonable death rates have changed since the Second World War. So the question is what governs our ideas about proportionality and civilian deaths in an age of instant satellite imagery, night vision and precision-guided weaponry? Unfortunately, I’m not sure the drone debate has given us any useful answers, nor the basis to produce them.

I appreciate that there are important differences here – the military is, in theory, a hierarchical chain of command that is obliged to follow the laws of war. The CIA (which carries out the drone program) is staffed by civilians who do not meet these expectations and whose status in law is questionable. But status here is not the issue (at least for this blog post and how it relates to Charli’s concerns). Instead, the issue is whether and at what point civilian deaths may be considered “atrocity”, on what basis we can and should make that decision, and whether that language would make any useful or practical difference.

There is no doubt that the recent move to a “zero civilian deaths” standard, or at least to high expectations of few casualties, has been rapid. Certainly it is at least partly a product of the increased legal activity by governments, IGOs and NGOs in the realms of international law and the laws of war. However, I think it is also the result of a false promise that better technology can allow us to have “clean” wars. It is a promise made by governments to their populations, but one that has also clearly influenced activists in terms of their expectations – whether those are set in terms of laws, rights or atrocity.


Whitney Houston, Chris Brown, and Grammy Irony


This Sunday the 2012 Grammy Awards attracted more attention than normal due to the untimely passing of Whitney Houston on the eve of the awards show.
During the Sunday night event, numerous artists dedicated their award to Houston or mentioned her amazing talents and the loss her death will mean to the industry.
Interestingly, running counter to the evening’s somber dedication theme was a notable counter-story: the ordained comeback of Chris Brown’s career. Brown was made infamous in 2009 when he was charged with beating his then-girlfriend Rihanna. Images of a brutalized Rihanna surfaced across the web, and Brown’s skyrocketing career was effectively snuffed out, with big names in the business like Jay-Z and Kanye refusing to associate with the artist.
But that was 2009 and this is 2012. Since the incident, Brown has released an album that rose to the top of the charts. He’s back in favor with key R&B players, and is largely viewed as one of R&B’s sexiest males ( nominated him the hottest male solo artist in 2010).
The full-circle turnaround for Brown culminated at the Grammys on Sunday, where he performed alongside the industry’s other top players and won for best R&B album.
There are several troubling aspects to these counter-themes at the Grammys.
First, that a man publicly associated with domestic abuse would be so generously celebrated at the same awards show that paid tribute to Whitney Houston, a woman who herself suffered a public battle with domestic abuse at the hands of her former husband Bobby Brown.
Second, the music industry’s general amnesia, or hypocritical acceptance, of an artist it chose to shun just three years ago. What about all the hype in 2009 about sending a message about violence and respecting women?
Finally, what’s most concerning has been some of the unexpected responses to, and defense of, Chris Brown’s return- including a surge in women not only supporting him, but also sending tweets about their desire to ‘be beaten’ by him (see the following summary of tweets if you want to be completely dismayed).
What does this all mean about the state of domestic abuse generally, and the music industry and its promotion of womanizing, degrading, and violent lyrics and artists? Does no one connect Houston’s drug abuse to her experience of domestic abuse and her tumultuous private life? I don’t look to awards shows to stand as moral beacons, but I do think it is worth considering these counter Grammy narratives as a signal of the state of popular culture and gender relations at the moment.


© 2019 Duck of Minerva
