Author: Drew Conway

On The Ecology of Human Insurgency

Charli highlighted the recently published work of Sean Gourley in Nature on the pattern of frequency and magnitude of attacks in insurgencies, so I wanted to cross-post my critique of this work to initiate a discussion here at Duck.

The cover story of this month’s Nature features the work of a team of researchers examining the mathematical properties of insurgency. One of the authors is Sean Gourley, a physicist by training and a TED Fellow, and the piece represents the culmination of research by Gourley and his co-authors, a body of work that I have been critical of in the past. The article, entitled “Common ecology quantifies human insurgency” (gated), attempts to define the underlying dynamics of insurgency in terms of a particular probability distribution, specifically the power-law distribution, and to explain how this shapes the strategy of insurgents.

First, I am very pleased that this research is receiving such a high level of recognition in the scientific community, e.g., Sean tweeted that this article “beat out ‘the new earth’ discovery and the ‘possible cancer cure’ for the cover of nature.” Scholarship on the micro-level dynamics of conflict is undoubtedly the future of conflict science, and these authors have ambitiously pushed the envelope, collecting an impressive data set spanning both time and conflict geography. Bearing in mind the undeniable value of this work, it is important to note that several claims made by the authors do not seem consistent with the data, or at least require a dubious suspension of disbelief.

In many ways I reject the primary thrust of the article, which is that because the frequency and magnitude of attacks in an insurgency follow a power-law distribution, this somehow illuminates the underlying decision calculus of insurgents. Without belaboring a point that I have made in the past, the observation that conflicts follow a power-law is in no way novel, though I am encouraged that the authors did cite the seminal work on this subject (thank you for pointing out my errata, Sean). The data measure the lethality and frequency of attacks perpetrated in the Iraq, Afghanistan, Peru and Colombia insurgencies, but the connection between this and the strategy of an insurgent is missing.

The authors’ primary data sources are open media reports on attacks; therefore, their observation simply reveals that open-source reporting on successful insurgent attacks follows a power-law. There are two critical limitations in the data that prevent it from fully answering the questions posed by the authors. First, there is some non-negligible level of left-censoring, i.e., we can never quantify the attacks that are planned by insurgents and never carried out, or those that are attempted but fail (defective IEDs, incompetent actors, etc.). Although they do not inflict damage, these attacks are clearly byproducts of insurgent strategy, and therefore must be present in a model of this calculus. Second, while the authors claim to overcome selection bias by cross-validating attack observations, this remains a persistent problem. Consider the insurgencies in Iraq and Afghanistan: in the former, most of the attacks occurred in heavily populated urban areas, garnering considerable media coverage. In contrast, Afghanistan is largely a rural country where the level of media scrutiny is considerably lower, meaning that media outlets there are inherently selective in what they report, or that most reports are generated by US DoD sources. How do we handle the absence of attack observations for Afghan villages outside the purview of the mainstream media?
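To make the censoring concern concrete, here is a minimal simulation, entirely my own toy construction rather than anything from the article, showing how under-reporting of small attacks biases the fitted power-law exponent downward:

```python
import math
import random

random.seed(42)

def sample_powerlaw(n, alpha=2.5, xmin=1.0):
    """Inverse-CDF sampling from a continuous power law p(x) ~ x^(-alpha)."""
    return [xmin * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def mle_alpha(xs, xmin=1.0):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    xs = [x for x in xs if x >= xmin]
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)

severities = sample_powerlaw(50_000)  # every attack, reported or not
# Toy reporting model (an assumption): small attacks are usually missed
# by the media, while large attacks are almost always covered.
reported = [s for s in severities if random.random() < min(max(s / 20, 0.05), 1.0)]

print(f"alpha, all attacks:      {mle_alpha(severities):.2f}")  # close to the true 2.5
print(f"alpha, reported attacks: {mle_alpha(reported):.2f}")    # biased low
```

Fitting only the observed tail recovers a power law either way; the point is that the estimated exponent, and any strategic story built on top of it, is partly an artifact of what gets reported.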

The role of the media is central to the decision model proposed by the authors, which is illustrated in the figure above. Again, however, this presents a logical disconnect. As the figure describes, the authors claim that insurgents update their beliefs and strategies based on the information and signals they receive from broadcast news, then decide whether to execute an attack. For lack of a better term, this is clearly putting the cart before the horse. The media reports attacks, as the authors’ data clearly shows; therefore, the insurgents’ decision to attack creates the news, and insurgents gain no new information from media reports on attacks that they themselves have perpetrated. Rather, the insurgents retain a critical element of private information, and update based on the counter-insurgency policies of the state, information they are very likely not receiving from the media. The framework presented here is akin to claiming that in a game of football (American) the offense updates its strategy in the huddle before ever having seen how the defense lines up. Without question both sides in football update strategy constantly, but it is the offense that dictates the tempo, and in an insurgency the insurgents are on offense.

This interplay between an insurgency and the state is what must be the focus of future research on the micro-dynamics of conflict. From the perspective of this research, a more novel track would be to attempt to find an insurgency that does not follow a power-law, but rather a less skewed distribution, such as the log-normal or a properly fit Poisson. Future research may also benefit from examining the distribution of attacks in the immediate or long-term aftermath of a variation in counter-insurgency policy. After addressing some of the limitations described above, such research might begin to identify the factors that explain why some counter-insurgency policies shift the attack distribution away from the power-law. The key to any future research, however, is to connect this to the context of the conflict in a meaningful way.
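As a sketch of what such a distributional horse race could look like, the following compares maximum-likelihood fits of a power law and a log-normal on synthetic severity data; a serious version would use the full Clauset–Shalizi–Newman machinery with bootstrapped goodness-of-fit tests, and every parameter here is an illustrative assumption:

```python
import math
import random

random.seed(1)

# Synthetic attack severities drawn from a log-normal, i.e. NOT a power law.
data = [math.exp(random.gauss(1.0, 0.6)) for _ in range(20_000)]
xmin = min(data)

# Power-law MLE and its log-likelihood on the same data.
alpha = 1 + len(data) / sum(math.log(x / xmin) for x in data)
ll_pl = sum(math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin) for x in data)

# Log-normal MLE (mean and sd of the logs) and its log-likelihood.
logs = [math.log(x) for x in data]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
ll_ln = sum(-math.log(x * sigma * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2) for x in data)

print("log-normal preferred" if ll_ln > ll_pl else "power law preferred")
```

Running this comparison on actual insurgency data, conflict by conflict, is exactly the kind of test that could separate a substantive claim about insurgent structure from a statistical regularity.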

Again, congratulations to Sean and his team, I hope their piece will initiate a productive discussion in both academic and policy arenas on the methods and techniques for studying the micro-dynamics of conflict.

Photo: Nature


The External Validity of Terrorism Studies on Israel/Palestine

The growing desire to understand the rationality of suicide terrorism, as well as to test theoretical concepts empirically, has generated several interesting political economy studies of terrorism. As such, a recent NBER working paper caught my eye for several reasons. The paper, entitled “The Economic Cost of Harboring Terrorism,” adds to this body of work by focusing on an area that has yet to be explored. Very often the question of interest in these studies is, “how do terrorist attacks affect the target economy?” In this paper the authors reverse the question and ask, “how do terrorist attacks affect the economic conditions of the area from whence the attack came?”

The question is a very good one, and the authors investigate it with a unique data set:

Our analysis overcomes these difficulties by relying on a detailed data set of suicide terror attacks and local economic conditions together with a unique empirical strategy. The available data set covers the universe of suicide Palestinian terrorists during the second Palestinian uprising, combined with quarterly data from the Palestinian Labor Force Survey on districts’ economic and demographic characteristics, and Israeli security measures (curfews and Israeli induced Palestinian fatalities).

The punchline…

…a successful attack causes an immediate increase of 5.3 percent in the unemployment rate of an average Palestinian district (relative to the average unemployment rate), and causes an increase of more than 20 percent in the likelihood that the district’s average wage falls in the quarter following an attack. Finally, a successful attack reduces the number of Palestinians working in Israel by 6.7 percent relative to its mean. Importantly, these economic effects persist for at least two quarters after the attack.

While I think this paper introduces a very important research paradigm, I have concerns with some of the technical assumptions built into the analysis, and with the overarching reliability of research focusing exclusively on terrorism in the Israel/Palestine conflict. With respect to the technical assumptions, one line in the paper struck me as very problematic: “Our empirical strategy exploits the inherit randomness in the success or failure of suicide terror attacks as a source of exogenous variation to investigate the effects of terrorism on the perpetrators economic conditions.”

I find it very difficult to accept the notion that success and failure are random across suicide attacks, especially within this particular conflict. There is clearly no support for a theory that the selection of suicide attack sites is random; it follows that the success of an attack is also a function of both the selected target and the learning processes of both attackers and defenders. We should therefore expect high autocorrelation in success across attacks occurring within a relatively small geographic area. Such difficulties highlight the general problem of external validity for terrorism studies that focus solely on the Israel/Palestine conflict.
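The point about learning undermining the randomness assumption can be illustrated with a toy simulation; both processes below are my own invented examples, not the paper's data. When defenders adapt after each success, the success series becomes autocorrelated rather than i.i.d.:

```python
import random

random.seed(7)

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a numeric series."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / (n - 1)
    return cov / var

# Baseline: the success of each attack is an independent coin flip.
iid = [1 if random.random() < 0.5 else 0 for _ in range(10_000)]

# Learning: defenders harden targets after each success, attackers adapt
# slowly after each failure, so the success probability drifts over time.
p, learned = 0.5, []
for _ in range(10_000):
    s = 1 if random.random() < p else 0
    learned.append(s)
    p = max(0.05, min(0.95, p - 0.15 if s else p + 0.05))

print(f"lag-1 autocorrelation, i.i.d. success: {lag1_autocorr(iid):+.3f}")
print(f"lag-1 autocorrelation, with learning:  {lag1_autocorr(learned):+.3f}")
```

Any such dependence means the success indicator is not a clean source of exogenous variation, which is what the identification strategy requires.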

It is not surprising that researchers often default to data on terrorist attacks from this conflict. Given the relative openness of Israel’s democratic government, the media attention on Palestine, and the unfortunate frequency of attacks, there exists a large amount of data from this conflict. As I have mentioned before, however, it is very difficult to infer causality from these data given the natural interconnectedness of the conflict dynamics. Any large-N study of terrorism in this context has enormous selection problems, as terrorists innovate to evade the defensive tactics of the ISF, and the Israelis create new policies that may provoke or dissuade terrorist activity. No other ongoing low-intensity conflict exhibits these issues to this degree, making it difficult to draw parallels between findings from research focusing on Israel and Palestine and any other conflict.

I am curious as to others’ thoughts on this issue of external validity, and welcome your comments.

Photo: Norman G. Finkelstein


The Logic of Violence in Counterinsurgency

I have alluded to Jason Lyall’s work on the use of indiscriminate violence in counterinsurgency in the past. Briefly, Lyall’s paper (recently published in JCR) examines how the Russian army used targeted and non-targeted shelling in Chechnya through a pseudo-natural experiment. The paper is fascinating; however, I have always had two major issues with it. First, Lyall claims that the “harass and interdiction” pattern of shelling used by the Russians was randomized, and thus constitutes indiscriminate violence. With even my limited exposure to US Army protocol, it is difficult to claim that this pattern is truly random. More importantly, though, Lyall’s piece always struck me as an extremely useful empirical analysis in search of a theory.

A recent working paper entitled, “The Political Economy of Counterinsurgency Violence,” seeks to fill this void by offering a simple formalization of counterinsurgency strategy. In fact, the author asks an extremely important question in the opening paragraph:

Why are counterinsurgents often so brutal toward civilians if classical counterinsurgency theory is correct in suggesting that successful counterinsurgents must win—not destroy—the hearts and minds of the population?

To understand this dynamic the author models counterinsurgency as a game with three players: first, a coalition between a rebel group and its popular support within a community, and second, the counterinsurgent. To achieve its goals, the counterinsurgent seeks to divide this coalition through a mixture of violence and concession, which tempts one side of the coalition to defect on its partner for short-term gains, forgoing long-term goals. Formally, the game is played as a public goods game, where each player extracts some level of “profit” from the insurgency, offset by the cost of participation. The counterinsurgent thus seeks to short-circuit the profit chain through the threat or execution of violence.
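The coalition logic can be sketched in a few lines. This is my own toy parameterization of the intuition, not the working paper's actual game: each coalition member keeps participating only while its share of the insurgency's "profit" exceeds its cost, and counterinsurgent violence works by raising that cost until the cheaper-to-flip partner defects:

```python
def stays_in_coalition(profit_share, base_cost, coin_violence):
    """A member defects once its total cost (participation plus expected
    counterinsurgent violence) exceeds its share of the insurgency's profit."""
    return profit_share > base_cost + coin_violence

# Illustrative numbers only: rebels capture most of the profit, the
# population's stake is smaller and so easier to price out.
rebel_share, population_share = 7.0, 3.0

for violence in (0.0, 1.5, 3.0):
    rebels = stays_in_coalition(rebel_share, 2.0, violence)
    population = stays_in_coalition(population_share, 1.0, violence)
    print(f"violence={violence:.1f}: rebels stay={rebels}, population stays={population}")
```

In this stylized version, escalating violence splits the population from the rebels before it deters the rebels themselves, which is one way to read the "divide the coalition" mechanism; the paper's full model is of course much richer.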

What falls out of this model is a very interesting observation about how insurgencies are a function of the micro-economies in which they take place. As the author states:

The rebels’ profit from insurgency increases due to windfall and black market revenue, external aid, natural resources, taxation, remittances, looted property and labor, and the availability and attractiveness of the rebels’ sanctuaries. An increase in the rebels’ accountability to the population and a decrease rebel profit results from restrictive geography, vulnerability to the population’s disloyalty caused by the nature of the rebel group’s organization, and the presence and strength of quasi-judicial institutions with which to sanction rebels’ abusive behavior…Factors negatively and positively affecting the actors’ relative profit during insurgency ought to correlate with the government’s use of indiscriminate violence.

The model is clever, and the author’s keen attention to the work of key counterinsurgency scholars comes through in his incorporation of critical elements of insurgency into the model. What is interesting, however, is that the model does not do a good job of predicting the kind of indiscriminate violence observed by Lyall in his research. The author uses case studies from Guatemala and Turkey to support his theory, but given the high profile of Lyall’s work it would have been much more satisfying to see a model that explained those observations. Of course, it is not the job of a modeler to fit data, and it may simply be the case that Lyall’s natural experiment is flawed and this model requires better data for testing; either way, the article is very engaging and I highly recommend it.

Photo: New Internationalist


URGENT: Stop Coburn Amendment to End NSF Program for Political Science

Senator Tom Coburn (R-OK) has proposed an amendment to the Senate Commerce, Justice, Science appropriations bill that would end the National Science Foundation’s program for political science.

I have set up an online petition to rally opposition to this amendment, and I ask any and all readers to sign it, and pass it on to anyone and everyone. I also encourage everyone to call their senators and ask them to stop amendment 2631 to H.R. 2847.

https://www.petition2congress.com/2/2508/keep-nsf-political-science-program/

UPDATE: The petition has generated over 2,000 letters to Congress in less than 24 hours!


Research and Data on September 11 Terrorist Attacks

It is an appropriately gloomy day here in Manhattan, as the city and the country remember the horror of September 11th, 2001 and attempt to continue to collectively heal. For me, part of that healing process has been trying to understand what happened, and more importantly, how to prevent it from ever happening again. Over the past eight years many others have been moved to investigate and analyze these events, which has led to a plethora of research on 9/11, some good, some not so good.

As someone who attempts to read everything that comes across my desk related to these attacks, I thought today an appropriate time to compile a short list of my favorite research and data on the terrorist attacks of September 11th.

Research

  • Leaderless Jihad: Terror Networks in the Twenty-First Century, Marc Sageman – Much of the initial academic and popular research on the causes of terrorism in the aftermath of 9/11 focused on the colloquial wisdom that terrorists were poor, uneducated, and disaffected young men. Sageman was the first scholar to apply real scientific rigor to the analysis of terrorist origins, and using his own data on the Hamburg cell, the book continues to stand out as one of the best treatments of the formation and motivation of the 9/11 hijackers.
  • Responder Communication Networks in the World Trade Center Disaster: Implications for Modeling of Communication Within Emergency Settings, Journal of Mathematical Sociology, 31(2), 121-147, Carter T. Butts, Miruna Petrescu-Prahova, and B. Remy Cross – This is one of the most distinctive and interesting studies of the events of 9/11. Butts and his co-authors use data from emergency responder radio communication to build a dynamic collaboration network. This is a great paper for those interested in time-space relations under heavy stress and uncertainty.
  • The Internet Under Crisis Conditions: Learning from September 11, National Academies Press – I was fortunate enough to have attended the release conference for this research in Washington, DC. This remains the most comprehensive examination of global internet traffic and network response in the aftermath of the loss of a major node at the World Trade Center.
  • An economic perspective on transnational terrorism, European Journal of Political Economy, Volume 20, Issue 2, June 2004, Pages 301-316, Todd Sandler and Walter Enders – Sandler and Enders are two leading scholars on the relationships among politics, economics and terrorism, and have written extensively on the topic. This article is one of the first to apply a game theoretic model to the economics of terrorism in the aftermath of 9/11.

Data

As always, I welcome any and all addendum to the list.


How is the Afghanistan Election Model Different?

Today much of the world will be focused on Afghanistan, as that country embarks on its second attempt at a democratic election. With a constant threat of violence from the Taliban, the level of participation has been limited. Early reports also state that police have been cracking down on journalists, so information coming out of Afghanistan has been limited. There are, however, several good sources still operating, and I have put together a short list of useful links for following the election below:

Charli had a very interesting post this morning on possible outcomes of this election. She points to another FP post that asserts the worst case scenario for Afghanistan would be a result similar to that in Iran: a disputed election with accusations of fraud. After the Iranian election we had several lively debates at ZIA on ideas for modeling and predicting that election’s outcome. Also, NYU faculty member Bruce Bueno de Mesquita’s work on using game theory to analyze elections was thoroughly covered over the weekend in the New York Times Magazine. Given the apparent power of game theoretic models to predict these processes, the question becomes: what is Afghanistan’s model?

The immediate and obvious difference between the two countries is the presence of the ISAF, most notably the U.S. military. Given how heavily vested the United States is in the outcome of the Afghan election, any model would have to include this force as a key player. More interesting, perhaps, is the internal game being played among the political rivals. No matter the outcome, the declared winner will almost certainly have to concede some level of authority to his rivals in order to maintain some semblance of unity among the heavily fractured groups within the country.

Finally, one aspect of this process that is often overlooked, and a constant point of contention I have heard time and again from friends and former colleagues who have served in Afghanistan, is the underlying tribal dynamics embedded in Afghan political culture. As modelers, particularly those of us who are students of the selectorate model, we often think of elections as competition among an elite set of actors attempting to satisfy either a small or a large selectorate in order to maintain power. The Afghan model, however, may be very different. In this country maintaining power requires balancing the needs of several intertwined tribal groups, with long histories and subtle relations that span geographies, whose individual utilities for electing a given candidate may be inseparable. That is, one tribal group may gain or lose utility as a result of how its vote affects another tribe. As such, is there a way to reconcile the traditional models of power politics with the highly decentralized framework of Afghanistan’s political landscape?

I welcome your thoughts in the comments.

Photo: China Daily


ABM in the Social Sciences

While I am in the throes of designing and implementing an agent-based modeling approach to study how democracies react to extreme external shocks, I wanted to take a brief break from coding and writing to highlight two very interesting pieces in the current issue of Nature that address ABM directly. The first, “Economics: Meltdown modelling,” discusses how advanced agent-based models might be able to help predict future economic crashes—complete with a vignette where a futuristic ABM prevents a collapse. The problem, as the article asserts, is that ABM is often rejected by mainstream economists.

Many [economists] argue that agent-based models haven’t had the same level of testing…agent-based model of a market with many diverse players and a rich structure may contain many variable parameters. So even if its output matches reality, it’s not always clear if this is because of careful tuning of those parameters, or because the model succeeds in capturing realistic system dynamics. That leads many economists and social scientists to wonder whether any such model can be trusted. But agent-based enthusiasts counter that conventional economic models also contain many tunable parameters and are therefore subject to the same criticism.

This aversion to ABM is persistent throughout the social sciences, which creates an odd dynamic where ABM enthusiasts must often spend a great deal of time justifying the method before research can even begin. What is baffling about this situation, however, is that ABM is just a tool: useful for some research questions, but ultimately an imperfect device, just as nearly all other research methods in the social sciences are imperfect. This is precisely the sentiment of the authors of the second article, an op-ed entitled “The economy needs agent-based modelling.” In discussing the current state of the art in analytical economic models the authors note:

The best models they have are of two types, both with fatal flaws. Type one is econometric: empirical statistical models that are fitted to past data. These successfully forecast a few quarters ahead as long as things stay more or less the same, but fail in the face of great change. Type two goes by the name of ‘dynamic stochastic general equilibrium’. These models assume a perfect world, and by their very nature rule out crises of the type we are experiencing now…As a result, economic policy-makers are basing their decisions on common sense, and on anecdotal analogies to previous crises such as Japan’s ‘lost decade’ or the Great Depression. The leaders of the world are flying the economy by the seat of their pants.

Why, then, is ABM treated as particularly fallible? As a user and developer I have pondered this many times. I believe the primary issue for many critics is the notion of “creating a universe for experimentation,” i.e., the belief that an ABM must account for all of the complexity. The easy response to such a critique is simple: no one believes that. My first exposure to ABM was zero-intelligence agents, and I was struck by how such simple models could predict the dynamics of real markets (so much so that I thought I might name a blog after them someday). Quality ABMs focus on a narrow set of agent attributes, and attempt to glean the maximum insight from these simple mechanics. For a more philosophical response I will paraphrase the great econometrician Neal Beck in saying that “all of statistics is a sub-field of theology.” That is, with any model we presume to know the “real truth,” but accept the inherent error and still attempt to build knowledge from the analysis. ABMs are no different; these models simply leverage a different technology and analytical framework to produce conclusions.
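For readers who have not met zero-intelligence agents, here is a minimal Gode and Sunder style double auction. The traders bid entirely at random, constrained only by their private values and costs, yet trade prices still settle around the competitive equilibrium; the parameters and the midpoint trading rule are my own simplifications:

```python
import random

random.seed(3)

def zi_market(n=200, rounds=20_000):
    """Zero-intelligence-constrained double auction: random offers,
    constrained only by each trader's private value or cost."""
    buyers = [random.uniform(0, 100) for _ in range(n)]   # private values
    sellers = [random.uniform(0, 100) for _ in range(n)]  # private costs
    prices = []
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b, s = random.choice(buyers), random.choice(sellers)
        bid = random.uniform(0, b)    # a constrained buyer never bids above value
        ask = random.uniform(s, 100)  # a constrained seller never asks below cost
        if bid >= ask:                # crossing offers trade at the midpoint
            prices.append((bid + ask) / 2)
            buyers.remove(b)
            sellers.remove(s)
    return prices

prices = zi_market()
print(f"{len(prices)} trades, mean price {sum(prices) / len(prices):.1f}")
# the mean price lands near the competitive equilibrium of 50
```

Gode and Sunder's original point stands: the market institution, not agent rationality, generates the aggregate regularity, which is exactly the spirit in which good ABMs stay narrow.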

I welcome both critics and supporters of ABM to make the case for and against their use. It should be noted, however, that those railing against new technology often become victims of their own shortsightedness.

Photo: Nature


Interpreting Terrorists’ Strategies

I have mentioned the outstanding blog Cheap Talk several times in my Twitter feed, but have yet to promote it in a blog post. If you are not familiar with the blog, the authors present excellent daily commentary on current events from a formal and strategic perspective; I highly recommend it.

Today, author Sandeep Baliga, Associate Professor of Managerial Economics at the Kellogg School of Management, offers examples of strategies to incite government repression from three terrorist organizations: ETA (Spanish Basque separatists), the ALN (leftist Brazilian rebels) and al-Qaeda. For example, here is Baliga’s entry for al-Qaeda:

Al Qaeda strategy:

Force America to abandon its war against Islam by proxy and force it to attack directly so that the noble ones among the masses….will see that their fear of deposing the regimes because America is their protector is misplaced and that when they depose the regimes, they are capable of opposing America if it interferes. Abu Bakr Naji, The Management of Savagery (p. 24)

There are two problems with presenting terrorist strategy in this way. First, in each of the three examples the stated strategy is an interpretation of the group’s strategy by a third-party observer rather than a statement of purpose from the group itself. As such, these interpretations are prone to inherit the strategic assumptions of that observer. In the case of al-Qaeda, the group’s formation and strategic roots reach back to the jihad against the Soviet Union, while its focus on the U.S. did not start in earnest until the first Gulf War. The assertion that al-Qaeda’s strategy is to “Force America to abandon its war against Islam” is clearly an interpretation of more recent signals from the group, and does not account for the compounding of historical strategy into contemporary motives.

Next, grouping terrorist organizations like ETA and the ALN with al-Qaeda is problematic given the former groups’ micro-strategic focus and al-Qaeda’s macro-motives. Both ETA and the ALN have local motives, and thus their tactics for political coercion reflect those goals. Al-Qaeda, on the other hand, is an amorphous international umbrella that acts more as an inspiration to local groups than as a strategic hub. Recognizing the scope of a group’s area of interest and influence is a critical first step when attempting to examine a terrorist organization’s broader strategic focus.

The assumption that terrorists’ strategy is to incite government repression, however, is an interesting starting point for a model of coordination between a terrorist group selecting targets and government counter-terror efforts. Assume that the terrorists’ strategy is to successfully attack a target that will provide the maximum repressive response, while the state’s strategy is to minimize the number of civilian casualties. Given some matrix of targets and simultaneous allocation of resources, what are the equilibrium coordination strategies for each player?

Thinking abstractly, it seems that both players would allocate all, or most, of their resources to the targets most likely to result in a mass casualty event. The empirical evidence supports this framework, given that mass casualty events have prompted extreme restrictions on civil liberties all over the world. This, however, would result in a stalemate equilibrium in which no successful attacks take place, because terrorist targeting resources are presumably met with equal counter-terrorism efforts. Could this be why we see so few successful terrorist attacks relative to failed ones? Would such a model show that when attacks are successful it is because one side has obtained an informational advantage that causes the other to maintain an off-equilibrium allocation? I am interested in others’ thoughts on the value of such a model, and its potential consequences.
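A toy version of that allocation game makes the stalemate intuition concrete. Suppose, purely for illustration, that expected casualties at target i are v_i * (1 - d_i), where d_i is the share of defensive resources placed there. With two targets, the defender's minimax allocation equalizes expected casualties, leaving the attacker indifferent across targets:

```python
def minimax_defense(v1, v2):
    """Two-target equalizing allocation: setting v1*(1-d1) = v2*(1-d2)
    with d1 + d2 = 1 gives d1 = v1 / (v1 + v2)."""
    d1 = v1 / (v1 + v2)
    return d1, 1 - d1

def expected_casualties(v, d):
    return v * (1 - d)

v_mass, v_minor = 90.0, 10.0          # a mass casualty target and a minor one
d_mass, d_minor = minimax_defense(v_mass, v_minor)

print(f"defense shares: {d_mass:.2f}, {d_minor:.2f}")
print(f"expected casualties: {expected_casualties(v_mass, d_mass):.1f}, "
      f"{expected_casualties(v_minor, d_minor):.1f}")  # equalized
```

Under this allocation no target offers the attacker an edge, so a successful attack would require private information that pulls the defender off the equalizing mixture, which matches the off-equilibrium intuition above.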

Photo: dailylife


The Mathematics of War, Revisited

A few months back I wrote a post discussing Sean Gourley’s TED talk on the Mathematics of War; specifically, noting that his finding (a power-law distribution of attack frequency and severity in Iraq) was—well—old news. This set off an excellent discussion on Sean’s work, my comments, and more generally how the social and hard sciences can clash. More recently, Tom Ricks of The Best Defense blog revisited Sean’s talk with his own skepticism, which induced a response from Sean, and further skepticism by Ricks. In defense of his work, Sean responded to Tom’s post with the following:

With this new approach we can do several important things that were not possible before. We can understand the underlying structure of an insurgency i.e. how an insurgency ‘decides’ to distribute its forces (weapons, people, money etc). Further, we can explain why this kind of insurgent structure emerges in multiple different conflict zones around the world. We can estimate the number of autonomous insurgent groups operating within a theatre of war. We can monitor and track a conflict through time to see how either sides strategies are affecting the state of the war. Finally we can compare the mathematical patterns of current ongoing wars with past wars to estimate how close they are to ending.

I think Sean’s work is extremely important; in many ways our research interests run parallel, and this project has great potential. That said, his response leaves me with more questions than answers. Therefore, with Sean’s response in hand, I would like to revisit the mathematics of war.

First, I have serious doubts about the connection between the distribution of attack frequency and severity and the underlying structure of an insurgency. Power-law distributions can provide a categorical approximation of a network’s underlying structure because in those cases the distribution in question refers to the frequency of edge counts among nodes, a structural measurement. Even for networks, however, the actual underlying structures of networks following a power-law can vary wildly. Attack frequencies, on the other hand, have nothing to do with structure. In what way, then, is this metric valid for measuring the structure or distribution of insurgent forces?
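A tiny example shows why a distribution underdetermines structure even in the network case, where the structural claim is at least plausible. The two graphs below, my own illustration, have identical degree sequences (every node has degree 2) yet completely different topology; if a genuinely structural distribution cannot pin down structure, attack frequencies certainly cannot:

```python
def clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as a dict mapping each node to its set of neighbors."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count links among this node's neighbors
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        total += links / (k * (k - 1) / 2)
    return total / len(adj)

# Two disjoint triangles vs. a single 6-cycle: same degree sequence.
triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
             3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}

assert sorted(len(v) for v in triangles.values()) == sorted(len(v) for v in cycle.values())
print(clustering(triangles), clustering(cycle))  # 1.0 0.0
```

The triangles are maximally clustered and disconnected; the cycle is unclustered and connected. The degree distribution alone cannot tell them apart.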

There is also a large element of context that is not captured in this analysis. To get to Sean’s question of why different types of insurgencies occur in different parts of the world, with varying lethality and effectiveness, one must account for the inherent variance in ability among insurgents and insurgent organizations. We know that people vary in their abilities to perform any task, which of course includes insurgency; therefore, we must control for any exogenous or endogenous factors that could contribute to this variance, so as to avoid inserting into our analysis the belief that all insurgents are created equal. Once a reasonable number of theoretically justifiable control variables are identified, we may be able to get at this question at both a micro (insurgent) and macro (insurgency) level. At present, the data used in Sean’s analysis does not account for this variation.

Next, there has been quite a bit of research on the duration of wars, including state-on-state, civil, and insurgency. For this research, a critical hurdle has always been how to overcome bias in data collection and reporting when attempting to approximate how various factors contribute to the duration of a conflict. Sean uses open-source media accounts of attacks to develop his data, and because most of these media outlets are primarily motivated by profit it is difficult to view these data as unbiased. This problem, however, can be dealt with through various sampling techniques and control variables. Of greater concern are the eventual conclusions drawn by attempting to match conflict patterns in this manner. With Sean’s data, we might ask what factors contribute to ending conflicts that follow a power-law. Unfortunately, as previously discussed, all manner of conflicts follow this pattern. If two conflicts have a near-identical power-law distribution when observed in the long term, but upon examination we find that one is an insurgency and the other a state-on-state conflict, what insight have we gained? This categorical approach, therefore, may be significantly limited in its explanatory value.

Finally, I must point out that I have a very superficial perspective on Sean’s work, as I have only been exposed to the TED talk and the discussions that followed from it. There are likely many elements of this research that I am missing, and as such all of the above concerns may have already been addressed. I am interested in your take on Sean’s response, my position, and where you see the value in this research. To quote Tom, “Smart, statistically-comfortable readers: Do you see support for these claims?”

Photo: Chart of distribution of attacks with magnitude from “Variation of the Frequency of Fatal Quarrels with Magnitude,” by Lewis F. Richardson.

Modeling Torture, and the Ethical Dilemma of the Results

A few months ago fellow NYU inhabitant Joshua Tucker of The Monkey Cage asked what, if any, social science research had been done on the effectiveness of torture in obtaining valuable intelligence. Josh’s primary question was an ethical one: if a researcher had a personal objection to the use of torture, but through an empirical analysis of data found that it did in fact extract valuable intelligence, should the researcher attempt to publish the result despite his or her personal objection? This touched off a very interesting discussion among Monkey readers, and I recommend it to all.

Today, Josh revisits the topic, but this time with a bit of relevant research in hand. In “Interrogational Torture: Or How Good Guys Get Bad Information with Ugly Methods,” John Schiemann presents a theoretical model of an interrogation.
Briefly, the model has two players, the detainee and the state, where the state is uncertain about the value of the detainee’s knowledge and the detainee is uncertain about the state’s willingness to use torture. The state moves first, by either asking leading questions (an uninformative signal) or objective questions. The detainee must then decide whether to send a valuable message. Finally, the state evaluates this message to ascertain the detainee’s type, and from this decides whether to use torture to extract additional information (for a full description of the game see the paper).
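The state’s final move reduces to a threshold decision on its posterior belief. The sketch below is my own toy reduction of that last stage, not Schiemann’s model: the payoff values (intelligence value v, torture cost c, success probability s) are invented for illustration, and the signaling stages are collapsed into a single belief parameter.

```python
def state_tortures(p_high, v=1.0, c=0.4, s=0.7):
    """Return True if the state's expected payoff from torturing exceeds abstaining.

    p_high -- the state's posterior belief that the detainee holds valuable knowledge.
    Hypothetical payoffs: torture yields v with probability p_high * s at cost c;
    abstaining yields 0. So torture is chosen iff p_high * s * v - c > 0,
    i.e. above the belief threshold c / (s * v).
    """
    return p_high * s * v - c > 0

# Under these assumed payoffs the threshold is 0.4 / 0.7 ~ 0.57:
for belief in (0.2, 0.5, 0.8):
    print(belief, state_tortures(belief))
```

The point of the reduction is that the entire signaling apparatus in the paper exists to move this one posterior above or below a threshold, which is why the quality of the state’s prior intelligence matters so much to the model’s realism.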

In approaching this research, Schiemann struggled with the exact ethical dilemma hypothesized by Josh in his first post, and after deciding to research the topic he concluded:

…even in a worst case scenario in which torture is shown to be effective under some limited circumstances, we would want to know that. What is the alternative? The alternative is to do nothing and help preserve a status quo in which torture is unrestrained. As difficult as it would be to swallow a result showing some limited effectiveness of torture, I’d rather live with that than what the U.S. has been doing – and perhaps is continuing to do.

Two interesting points of discussion fall out of this. First, do we believe the model presented above is an accurate or useful interpretation of a state’s decision to use torture? One weakness to note is the presumed symmetry of uncertainty between the players. The state is rarely completely uncertain about the value of a detainee’s knowledge. Presumably some amount of intelligence collection went into the decision to capture and interrogate a detainee; therefore, the state can (and does) have the ability to rank the value of detainees. Likewise, unless a detainee is the first of a given conflict, the game is clearly repeated; consequently, all subsequent detainees will be able to update their beliefs about the state’s type. It may be more valuable, and easier to model, to make this a repeated game of one-way uncertainty, in which the state is known to use torture but the detainee’s type is unknown, by adding noise to the intelligence collected on a detainee prior to capture.
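The repeated-game point can be made precise with a one-line Bayes update: each new detainee revises the population’s belief that the state is a “tough” (torturing) type after observing how the previous detainee was treated. The conditional probabilities below are illustrative assumptions of mine, not parameters from the paper.

```python
def update_belief(prior, tortured, p_if_tough=0.9, p_if_soft=0.1):
    """Bayesian update of the belief that the state is a 'tough' type,
    after observing whether the previous detainee was tortured.

    p_if_tough / p_if_soft -- assumed probabilities that each state type
    tortures in any given interrogation (illustrative values).
    """
    l_tough = p_if_tough if tortured else 1 - p_if_tough
    l_soft = p_if_soft if tortured else 1 - p_if_soft
    return prior * l_tough / (prior * l_tough + (1 - prior) * l_soft)

# Beliefs drift as detainees observe the state's record across interrogations:
belief = 0.5
for observed_torture in (True, True, False):
    belief = update_belief(belief, observed_torture)
    print(round(belief, 3))
```

Even this crude version shows why one-way uncertainty is the more natural setup: after a handful of observed interrogations the state’s type is effectively common knowledge, and the interesting residual uncertainty is about the detainee.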

The second point of interest is Schiemann’s thoughts on how social scientists should approach researching ethically sensitive topics (for his full remarks see the Monkey post). My opinion is that all findings should be disclosed; first because disclosure is a fundamental principle of scientific endeavor, but more to the point, because it can expose false assumptions and allow more accurate models to be built and explored. For example, the model above is an excellent first step toward building a theory of how a state decides to use torture. As we can clearly see, however, it is in no way the definitive model on the topic. If the results from this model show that torture is effective, that does not mean it should be used. On the contrary, it means that under the assumptions of this particular model, in some cases, it is shown to be effective. Improving the model, and generating new results, may alter the conclusion completely (or not). This iterative process is the only way to contribute valuable knowledge to a discipline.

I am interested in others’ thoughts, both on the model itself and on how to approach research on these kinds of topics, particularly from practitioners (not necessarily of torture) within the defense community. How well does this model capture the dynamics of interrogation, and where does it fail? How might it be improved? Also, as consumers of social science research, how do you think the community should handle these ethical concerns?

Photo: Wikimedia

A McNamara Syndrome?

Robert McNamara was a complex giant in the field. Since his passing on Monday, several prominent IR scholars and practitioners have eulogized his life in a variety of ways. Most authors, however, note the disjointed nature of his legacy: great achievement in modernizing the Department of Defense, but also enormous failure in Vietnam. As someone often consumed by the power of numbers, I find McNamara’s most compelling accomplishment to be his dogged persistence in applying to the Department of Defense the quantitative approaches to management that had garnered him great success at Ford Motor Company. McNamara was also an (unconscious) believer in rational choice theory, at a time when these concepts were still the abstract vision of a small community of academics. If nothing else, McNamara’s evidence-based decision making was well ahead of its time.

In yesterday’s New York Times Errol Morris, director of the definitive McNamara documentary “The Fog of War,” wrote an excellent op-ed pondering how to remember the man. Morris’ closing remark refers directly to McNamara’s rational mentality, and its ultimate fallibility:

If he failed, it is because he tried to bring his idea of rationality to problems that were bigger and more deeply irrational than he or anyone else could rationally understand. For me, the most telling moment in my film about Mr. McNamara, “The Fog of War,” is when he says, “Perhaps rationality isn’t enough.” His career was built on rational solutions, but in the end he realized it all might be for naught.

This is quite provocative, but I reject the assertion that McNamara faced an irrational world. More likely, the ordered rationality he observed in Detroit was muddled in Washington by layers of bureaucratic malaise and political absurdity. What Morris does not consider, however, is how the failure of McNamara’s methods permeated the defense establishment, and with what consequences. That is, after being humiliated in Vietnam, and using McNamara as a prideful scapegoat, did the defense community develop an aversion to the quantification of warfare? A McNamara Syndrome?

The evidence seems to suggest that this may be the case. Of the prominent U.S. military leaders in the succeeding half-century, only Colin Powell approached McNamara in his desire to understand the dynamics of conflict through evidence-based analysis (it should be noted that Gen. Powell was ultimately not rewarded for this approach). What, then, is the role for modern political science in a defense policy community plagued by this syndrome? While there is still an ongoing debate within the discipline over the value of quantitative versus qualitative methods, the fact is that most contemporary discourse in political science, especially in IR, is based on rational choice models and explained through large-scale quantitative analysis. This creates an extremely problematic paradox: IR scholars reject baseless theory and attempt to explain conflict through rational choice theory and quantitative analysis; IR practitioners, however, reject the value of these theories and methods and attempt to manage conflict through institutional knowledge.

Perhaps McNamara’s most significant contribution is the institutional fear of methodology his failures instilled at the DoD, ultimately resulting in the much-lamented gap between theory and practice in international relations. As scholars, we must first ask ourselves if we care to overcome the McNamara Syndrome. If so, how can we reconcile our methods with practice?
