Editor’s note: this post first appeared on my personal blog.
This activity comes after students are to have listened to a lecture (slides) about how domestic politics helps us understand variation in the likelihood of international conflict. I focused particularly on whether the spread of democracy explains Europe’s transformation from one of the most violent parts of the world to one of the most peaceful, and on how the fear of coups and rebellion in Sub-Saharan Africa helps explain why there have been so few interstate wars there.
I closed out the portion focusing on the democratic peace by discussing how territorial disagreements both promote war and inhibit democracy, thereby creating a spurious correlation between joint democracy and peace (see this recent post of mine).
To help the students see how the nature of the threat a state faces might be related to the constraints placed on the government, which is a key part of that argument, I asked them to play the part of an opposition party that was being asked to allow the government to delay elections in each of two related scenarios.
The correct answer, as a relatively slim majority (?!) of them guessed, was the second scenario. I had expected this to be more straightforward. Among those who offered an explanation for their answer (which many did, though none was required), the correct answer was a clear favorite, and the reasoning was generally well in line with my expectations. But a substantial minority still went with the first scenario, for reasons that are not clear to me. Perhaps they were just guessing at random?
At any rate, my hope was that by getting them to more or less reveal through their own behavior a core part of an argument that implies that the correlation between joint democracy and peace is spurious, I might have convinced some of them that there’s something to the alternative argument. I don’t know how many of them connected those dots, and I’m fairly sure that no matter how many times I talk about correlation not necessarily implying causation, a good chunk of them will continue to draw invalid inferences when the conclusions fit their priors, but I think some of them got something out of this activity.
Editor’s note: this is a guest post by Anna O. Pechenkina, Post-Doctoral Fellow, Dept of Social and Decision Sciences, Carnegie Mellon University. It is primarily a response to an essay Branislav Slantchev recently posted on his personal website.
Branislav Slantchev advocates for NATO troops to be stationed in Eastern Ukraine to preserve western strategic interests in the region. The logic is that if NATO troops establish a “tripwire” in Ukraine, Russia will face a choice of whether to attack western troops and will (most likely) back down. While this describes the world in which I personally would like to live (full disclosure: having grown up in the region, I hope for its western integration), I suggest the West would not want to risk a war with Russia to preserve Ukraine’s current territory because in the long run, secession of Ukraine’s Eastern provinces will be damaging to Russian, not Western, interests.
This activity comes after students are to have listened to a lecture (slides) on international institutions, specifically the impact they have on patterns of armed conflict. The first half focused on peacekeeping, which works better (under some conditions) than many appreciate, while the latter focused on how international institutions can deter bad behavior even if they lack enforcement power. The argument, which I previously laid out (in a somewhat different form) here, is that international institutions need not have the power to punish so long as the statements they make have an impact on the likelihood that someone else will do so.
If institutions issue reports condemning bad behavior, which they don’t always catch, but never go out of their way to praise good behavior, then one might think that they only influence beliefs when they issue reports. But that’s not correct, at least if governments are even weakly Bayesian. Every time a report is not issued about a given state violating international law or otherwise misbehaving, a little more information is revealed.
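A quick Bayesian sketch makes the point concrete. The numbers below are hypothetical; the only structural assumptions are the ones just described: violations are only sometimes caught and reported, and good behavior is never singled out for praise.

```python
# Hypothetical numbers: prior belief that a state is violating, and the
# probability the institution catches and reports a genuine violation
# in a given period ("they don't always catch" violations).
prior_violator = 0.5
p_report_given_violation = 0.3

def update_after_no_report(belief):
    """Posterior that the state is a violator after one period of silence."""
    # P(no report | violator)  = 1 - p_report_given_violation
    # P(no report | compliant) = 1  (good behavior is never reported on)
    num = belief * (1 - p_report_given_violation)
    den = num + (1 - belief) * 1.0
    return num / den

belief = prior_violator
for period in range(5):
    belief = update_after_no_report(belief)
    print(f"after {period + 1} quiet period(s): P(violator) = {belief:.3f}")
```

Each period of silence nudges the belief downward, which is exactly the sense in which "a little more information is revealed" even when no report is issued.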
Editor’s note: a more detailed version of this post previously appeared on my personal blog.
If sanctions are to succeed as a tool of coercive diplomacy, they must impose real costs on the target. Yet, in most cases, they fail to do this—at least, directly. The economic costs tend to fall disproportionately on the average person, while the regime and its elite supporters often find ways to benefit from newly emergent black markets. But might sanctions put pressure on the regime through some other channel? Say, by increasing protests?
There have been many attempts at answering this question, all of which have been plagued by serious measurement issues. The recent release of new data both on sanctions and protests allows for a more convincing analysis, which Julia Grauvogel, Amanda Licht, and Christian von Soest provide in this paper.
One big problem any study of the impact of sanctions must deal with is that of strategic interaction. When an episode ends at the threat stage, we don’t get to observe what would have happened if sanctions had been imposed. So if we don’t look at what effect threats themselves have, we’re not getting the full story.
GLvS thus look separately at the impact of new threats and new impositions on protest activity. They also allow for the possibility that certain types of threats (impositions) might have a bigger effect. Under the assumption that the primary channel through which sanctions increase protests is through signaling that the international community shares (some of) the goals of the protesters, they check to see if it matters whether the sanctioners specifically targeted the human rights practices of the target regime, whether the sanctions are narrowly targeted at the regime and its supporters, and whether they are multilateral in nature.
Somewhat surprisingly, the authors find that none of that seems to matter. We of course need to be careful, because the absence of evidence is not the same as evidence of absence, but it appears that context isn’t too important. However, they do find, as expected, that threats are associated with an increase in protests, whereas the actual imposition of sanctions is not.
This activity comes after students are to have listened to a lecture (slides) on information problems as an explanation for war—which I’d say is the most useful explanation we’ve got. The broad contours of the argument are pretty straightforward, but the full implications are not. (That’s something of an understatement. As I’ve discussed a few times before, a lot of very smart people have made incorrect statements about what this argument implies. In fact, while I’ll gladly admit we’ve hit the point of diminishing marginal returns, I still think there’s a lot we’ve yet to learn from this way of thinking.)
This activity was designed to illustrate both the general point that war can occur as a result of states taking (optimal) gambles and also to demonstrate two less intuitive implications of the argument: states who expect to do poorly in war are not necessarily any less likely to risk war; and states who are fairly sure their fait accompli will provoke military resistance may still execute it, even if said resistance would cause them ex post regret.
The scenario described by the two parts of the activity is identical to that depicted by the game-theoretic model discussed in the lecture. But whereas the lecture walks through a general solution to the model, identifying optimal strategies under all possible conditions, and thus involves a fair amount of algebra, the activity assigns concrete numbers to everything and so simplifies things greatly. In fact, while I allowed the students to consult their notes and/or the slides during the activity, as a handful did, the optimal strategies here are sufficiently straightforward that most students correctly identified them without doing so (and probably, in many cases, without having listened to the lecture before class).
Of course, it helps that this time around, I didn’t throw things wide open the way I did with a previous activity. The only two options that make any sense (the largest land grab the blue type would be willing to live with, which would provoke a war if D happens to be red but bring a better payoff if they’re blue; and the largest land grab that the red type would tolerate, which ensures peace but not necessarily on the best possible terms) are identified for them. All they have to do is figure out which makes more sense.
As many correctly determined, it makes sense to gamble here. That’s not terribly interesting, in and of itself. But what one might overlook is that they just proved to themselves, through their own behavior, that it can make sense to risk war even if you know that, should the war that you genuinely hope to avoid occur, the outcome would be pretty terrible. In this case, grabbing 50% of the territory meant accepting a 30% chance of provoking a war that would leave you feeling as though you’d gained nothing at all (acquiring 10% of the territory but incurring costs that completely offset that gain). And yet most of them went for it. As they should have.
The second part is nearly identical to the first. However, here, any potential war would go pretty well for the challenger, even against the red type. The costs of war have even come down a bit. So it’s not surprising that even more of them decided to gamble here, taking 90% of the territory. Again, that’s not what I was trying to show. What I wanted them to focus on was that the vast majority of them went for the larger land grab despite the fact that they knew (or, at least, should have known) there was a 65% chance that they wouldn’t get away with it. As I hope was clear to at least some of them, taking 60% of the territory, which they’d have been sure to get away with, would certainly leave them better off than fighting a war against the red type would, since that would leave them feeling as though they’d acquired 50%. So even though the war they risked provoking wouldn’t prove disastrous, its occurrence would still entail ex post regret. It would be a war no one wanted. As most wars are.
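The arithmetic behind both gambles can be checked in a few lines. All the numbers below come from the activity itself, except that part 1's safe option isn't specified in the post, so only part 2 includes the full comparison.

```python
def expected_value(grab, p_war, war_payoff):
    """Expected (net) share of territory from attempting a given land grab."""
    return (1 - p_war) * grab + p_war * war_payoff

# Part 2: grabbing 90% carries a 65% chance of a war that would feel like
# acquiring only 50%, versus a sure 60% from the smaller grab.
gamble = expected_value(90, 0.65, 50)
print(gamble)  # 64.0 -- the gamble beats the sure 60%

# Part 1: grabbing 50% carries a 30% chance of a war that would feel like
# gaining nothing at all (gains fully offset by costs).
print(expected_value(50, 0.30, 0))  # 35.0
```

So in part 2, the expected payoff of the big grab (64) exceeds the safe 60, even though war breaks out most of the time: exactly the point that optimal gambling can produce wars nobody wanted.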
As ever, I’m not sure everyone got what I was trying to convey. But I think most of them did. Hopefully, most of them will now be less inclined to confidently assert (the way so many do) that it makes no sense to involve yourself in a war you don’t expect to win, or to see a war that no one really wanted and conclude that some or all of the personalities involved must have been deficient in some way. As I discussed in the lecture, it’s entirely possible that those factors were at work in many historical cases. But we don’t know that. We can’t know that, at least not with the sort of confidence many exhibit. Because despite what many seem to believe, such outcomes can arise as the result of optimal decision-making.
This activity comes after students are to have listened to a lecture (slides) on commitment problems. The lecture focused in particular on how the anticipation of future shifts in power can create incentives for preventive war. After walking them through a formal model fleshing out the argument, I then discussed the role of preventive motives in the US Civil War and showed them that interstate wars have occurred more often historically when there was reason to believe that war in the current year would have a significant impact on the distribution of military capabilities in the subsequent year. This activity applies that same argument to a slightly different setting: the problem of rebel demobilization, which Walter has called “the critical barrier to civil war settlement.”
This activity comes after students are to have listened to a lecture (slides) introducing the second big puzzle of the course: why states sometimes burn what they want in order to get more of it—that is, why wars occur despite the inefficiency their costly nature implies.
Over the course of the next few lectures, I’ll be taking them through the main arguments of Fearon 1995 (see also this blog post) step by step. But before we turn to the explanations for war, we first need to understand the inefficiency argument so as to fully appreciate why common explanations fail.
Today’s activity was simpler than many of the previous ones, consisting of a single decision that should have been pretty straightforward for those who actually understood the lecture (of whom, it appears, there were fewer than I would have liked).
The correct answer is 75%, as a handful (but, sadly, only a handful) of students determined. The reason is that D has no incentive to start a war unless doing so leaves them feeling as though they held onto more territory than if they allowed C to get away with their fait accompli, and since D will feel as though they have held onto a mere 25% of the territory if they resist (despite retaining control of 40%), C can safely take 75% without provoking a war. Should C take less than this amount, C will avoid a war, but will fail to achieve the best outcome possible. To take less than 75% of the territory in this case would be the equivalent of insisting on paying more than the amount due at the grocery store. Sure, you can. I think. I mean, I’m certainly not aware of any law against that. But why would you? (Of course, there’s a very important moral difference between countries taking less land than they could have gotten away with and individual consumers overpaying at the grocery store, but they weren’t asked what % of territory C could morally justify taking, nor do I think they understood it as such. If they had, I’d have expected a lot more answers of “0%” or “none” and a lot fewer of “100%” or “everything.”) On the other hand, taking more than 75% provokes a war, which leaves C feeling as though they’ve only acquired 50% of the territory. So that can’t be optimal.
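For anyone who wants to verify the logic mechanically, here's a brute-force sketch using the numbers above: D's war payoff feels like holding 25% of the territory, and C's feels like acquiring 50%.

```python
# Numbers from the activity: D's net war value and C's net war value,
# expressed as "felt" shares of the territory.
D_WAR_VALUE = 25
C_WAR_VALUE = 50

def c_payoff(grab):
    """C's (net) payoff from grabbing `grab` percent of the territory."""
    # D resists only if war feels better than living with the grab.
    d_resists = (100 - grab) < D_WAR_VALUE
    return C_WAR_VALUE if d_resists else grab

best = max(range(0, 101), key=c_payoff)
print(best, c_payoff(best))  # 75 75 -- the largest grab D will tolerate
```

Any grab above 75 triggers resistance and drops C's payoff to a felt 50; any grab below 75 leaves territory on the table.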
The most common answer was 100%. Every number on the slide made a pretty strong showing, particularly 60% and 50%, leaving me with the impression that a good number of students were guessing randomly. Some did, however, offer explanations for their answers that indicated they were thinking along the right lines but just didn’t quite connect all the dots. At any rate, after I explained the optimal strategy, they seemed to get it right away, which was encouraging.
In that lecture, I presented the epiphenomenal critique, which I think establishes an important baseline, then went on to discuss specific mechanisms by which institutions might solve coordination problems, collaboration problems, and problems of trust/fear of exploitation (the three primary explanations I offered for why states sometimes leave money lying on the ground). The first half of the activity sought to illustrate the epiphenomenal critique more clearly, while the second tested their understanding of the theoretical model I used to demonstrate that even if institutions merely serve as screening mechanisms, they may still facilitate cooperation that would not otherwise occur (as I’ve discussed here before).
The decisions they faced in the first part were pretty simple.
My expectation, which proved accurate, was that most of those who chose to cooperate would sign the treaty while most of those who did not would not. A significant number of non-cooperators signed the treaty, though fewer than the number who did not, and only one cooperator declined to sign. Thus, their behavior produced a fairly strong, though not perfect, correlation between cooperative behavior and support for an international treaty that, by design, absolutely did not matter at all.
This didn’t seem to impress them much. I’m not sure if that’s because the epiphenomenal critique is so intuitive that they already understood it clearly or if this exercise failed to clarify things or what. I guess I’ll find out how well they understand this point after they submit their answers to the midterm.
The second half was more challenging.
Here, the students face the same set of decisions as player 1 in the Model of Reassurance I discussed in the lecture they were to have listened to before class. In that model, there is an equilibrium where the blue type of player 1 (which is the type they’ve been assigned here) proposes an agreement, then cooperates with 2 if and only if 2 accepts the agreement, while 2 accepts the agreement and then cooperates if blue, but rejects the agreement and does not cooperate if red. Two important conditions for that equilibrium are: the red type finds the cost of the agreement too high for it to be worth mimicking the behavior of the blue type in hopes of tricking player 1 into trusting them; yet the blue type finds that cost acceptable. Note that these conditions are met here. So a student who was really on top of the material would notice that proposing agreements to all five countries and then cooperating only with those that accepted would leave them with no reason to fear exploitation and with a higher expected point total (30 if just one of the five other countries turned out to be blue, which is precisely what happened when I determined their types randomly) than any other strategy would yield.
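A minimal sketch of the screening logic, in code. The 5-point agreement cost and the 30-point outcome with one blue country come from the activity; the mutual-cooperation payoff of 35 is a placeholder I chose only so that those two figures line up, and the assumption that a rejected proposal costs and earns nothing is likewise mine.

```python
# Placeholder payoffs: the agreement cost (5) and the net 30 points with
# one blue country match the post; the 35 is chosen to make them consistent.
AGREEMENT_COST = 5
MUTUAL_COOPERATION = 35

def screening_payoff(types):
    """Propose to everyone, then cooperate iff the agreement was accepted.

    In the equilibrium described, blue types accept and cooperate, while
    red types reject because the cost makes mimicry unprofitable.
    """
    total = 0
    for t in types:
        if t == "blue":  # blue accepts: pay the cost, reap cooperation
            total += MUTUAL_COOPERATION - AGREEMENT_COST
        # red rejects: assumed here to cost nothing and earn nothing
    return total

types = ["red", "red", "blue", "red", "red"]  # one blue, as in the class draw
print(screening_payoff(types))  # 30
```

The point of the sketch is that the conditional strategy screens out the red types automatically, so the proposer never pays the cost of being exploited.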
Unfortunately, very few students selected the optimal strategy.
What most did, strangely enough, was to propose no agreements and cooperate with no one. Now, I understand the latter part. If you failed to understand how institutions can eliminate trust problems, particularly under conditions that just so happen to be met here, then it would make sense to play it safe. In fact, I set things up to ensure that cooperation would not be appealing if the trust problem couldn’t be solved. But what I don’t quite get is why so few of the non-cooperative students proposed agreements. I guess they were afraid that the other side would accept, thereby costing them five points, but weren’t willing to trust that anyone who accepted the agreement would cooperate. Or, more likely, they didn’t understand that they could condition their answer to the second part on whether the other side accepted their agreement. And that’s undoubtedly because I made them submit their answers before I assigned types to the five countries (which was key to determining whether they’d accept agreements, and which I wanted them to see me do so they didn’t think the outcome was rigged), but I instructed them to tell me separately what they would do in response to acceptance as well as what they would do in response to rejection. Sadly, virtually no one did so. I thought I was clear about that, but obviously I wasn’t clear enough. Next time, I’ll have to really drive home the fact that they not only can but should tell me what they would do after each possible response by the other side to their proposal.
So, I guess I’ll be seeing a lot of them again on Monday, when they get their usual chance to redo the activity.
On the upside, when I explained why the optimal strategy was what it was, most of them seemed to understand. I would have liked to see some of them figure it out on their own, as that would indicate that they had a good grasp of the key point of the lecture the activity was paired with, but at least the activity served its purpose.
Editor’s note: this post previously appeared on my personal blog.
This fourth activity comes after students are to have listened to a lecture (slides) on how states are currently leaving a lot of money lying on the ground by failing to cooperate more fully. The examples I used all concern economic cooperation—specifically, how there’d be a whole lot more stuff to go around if states changed their trade, exchange rate, and immigration policies—though I discuss other areas where states fail to reap all the available benefits of cooperation in other lectures. Look below the fold for details.
First, I asked them to fully characterize the optimal strategy for player 1 in the following modified centipede game, assuming player 2 adopts their optimal strategy.
That second part is important, and I stressed it quite a bit. I particularly made sure it was clear that I wasn’t asking them to imagine that they were playing this game with a friend. I did this because I knew that some students would argue that their answers made more sense than the one I was looking for—which, in fairness, might well be true if we didn’t know how player 2 would behave. As others have noted, if player 1 believes player 2 will not behave optimally, it can in fact be optimal for player 1 to adopt a strategy that could not be part of any Nash equilibrium. That’s an interesting possibility, and in a different context, I’d consider it worth exploring. But my goal here was to figure out who’s thinking strategically, as they should be if they listened to the lecture on game theory, and who’s trying to get by on gut intuition. As I’ve said before, some of the activities I have planned are strictly meant to help students understand things better and my expectation is that everyone will receive all the points available for that day. But when I decided to flip the classroom, I knew it would be important to give the students a strong incentive not to wait until the night before the midterm to start listening to the lectures. The only way to do that is to make sure some of the activities are much more challenging to those who didn’t listen than to those who did. This is one such activity.
It came as no surprise, then, that many students failed to identify the optimal strategy for player 1, which is take; take. (Actually, not a single student successfully characterized any full strategy. This is a bit disappointing, since I spent some time in the lecture pointing out that a strategy specifies what a player would do at every stage, regardless of whether they are called upon to do so in equilibrium, but I can’t say it’s especially surprising.) Some thought the game would end at the final node (which presumably means they think 1’s strategy is pass; pass) because that makes everyone better off. Others applied just enough strategic thinking to say that player 1 would take at the second opportunity, but evidently didn’t consider that 2 should anticipate this and so would not allow 1 the opportunity to do so, which in turn means that 1 has every incentive to take at the very beginning. But a decent number did correctly determine that player 1 would take right away, and I gave full credit to these students even though they didn’t specify what player 1 would do if called upon to make that second decision.
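For readers who want to see the backward induction done mechanically, here's a sketch. The payoff table below is hypothetical, since the post doesn't reproduce the actual numbers; what matters is the centipede structure: the pot grows with each pass, but whoever takes gets the larger share at that node.

```python
# Hypothetical payoffs for a four-node centipede. Player 1 moves at nodes
# 0 and 2, player 2 at nodes 1 and 3. take_payoffs[i] gives (p1, p2) if
# the mover takes at node i; end_payoffs applies if everyone passes.
take_payoffs = [(2, 1), (1, 4), (8, 2), (4, 16)]
end_payoffs = (32, 8)

def solve():
    """Return the backward-induction action ('take'/'pass') at each node."""
    value = end_payoffs
    strategy = [None] * len(take_payoffs)
    for node in reversed(range(len(take_payoffs))):
        mover = node % 2  # 0 = player 1, 1 = player 2
        if take_payoffs[node][mover] >= value[mover]:
            strategy[node] = "take"
            value = take_payoffs[node]
        else:
            strategy[node] = "pass"
    return strategy

print(solve())  # ['take', 'take', 'take', 'take']
```

Reading off player 1's nodes (0 and 2) gives the full strategy take; take: the unraveling works from the last node backward, which is exactly the step most students skipped.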
The second part, a version of the traveler’s dilemma, was harder.
Here, the correct answer is $2. And though some students figured that out (or guessed fortuitously), most did not. The most common answer was $99, though there were some more creative ones, like $51, and my absolute favorite, B. (Your guess is as good as mine.) I think the problem here was that students were treating player 2 as non-strategic. If player 2 wrote down $100, then player 1’s best response is indeed $99. But player 2 can anticipate this, and 2’s best response to 99 is 98. Of course, 1’s best response to 98 is 97. And so on. That logic drives both players down to $2.
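That unraveling is easy to verify mechanically. The $2–$100 range comes from the activity; I'm assuming the standard reward/penalty of $2, since the post doesn't state the actual value.

```python
# Traveler's dilemma: each player claims between LOW and HIGH dollars.
# If claims differ, both are paid the lower claim, with the lower claimant
# rewarded R and the higher claimant penalized R. R = 2 is an assumption.
LOW, HIGH, R = 2, 100, 2

def payoff(mine, theirs):
    if mine < theirs:
        return mine + R    # lower claim wins the reward
    if mine > theirs:
        return theirs - R  # higher claim pays the penalty
    return mine

def best_response(theirs):
    return max(range(LOW, HIGH + 1), key=lambda c: payoff(c, theirs))

claim = 100
while best_response(claim) != claim:  # iterate until a fixed point
    claim = best_response(claim)
print(claim)  # 2
```

Starting from $100, each round of best responses shaves a dollar off, and the only claim that is a best response to itself is $2.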
Hopefully, this exercise convinced students that even narrowly self-interested actors are best served by trying to see things from the other person’s perspective. If nothing else, though, I think I might finally be convincing some of the slackers that this is not the sort of course one receives an A in without trying.
For those wondering, this completes the portion of the course that is intended to help students interpret the analysis I’ll be presenting in future lectures. The rest of the course is devoted to the study of cooperation and conflict, and the next activity will be designed to convince them that there are indeed lots of benefits to cooperation that are currently going unrealized.
Editor’s note: this post originally appeared on my personal blog.
As I mentioned in a previous post, I’ve decided to try “flipping the classroom” this semester, meaning I’m posting the lectures online and using class time mostly for activities that reinforce core concepts and create incentives for students to keep up with the lectures from week to week. Look below the fold for a description of the second activity, which concerns the interpretation of regression results.
Editor’s note: this is a slightly modified version of a post that originally appeared on my personal blog.
As I mentioned here, I’ve decided to try “flipping the classroom” this semester, meaning I’m now posting the lectures online and using the class time this frees up for Q&A and for activities meant to reinforce core concepts and create strong incentives for students to keep up with the lectures from week to week. These activities will take a variety of forms, and I’ll post about each one in case anyone out there is interested.
Look below the fold for a description of the first activity.
This activity is meant to reinforce the role of simplification. It is less a test of their understanding of the material (as most future activities will be) than my attempt to convince my students that fairly significant distortions often do very little to diminish the recognizability of what is being modeled. Though some of the other activities I have planned will use up a full class period (in my case, that’s 50 minutes), this one is very brief and only takes a few minutes.
(As such, fewer points are at stake than will be true of other activities. By the end of the semester, across all 15 activities, students will have had a chance to earn 300 points. The class activities comprise 30% of their grade, so every point is worth 0.1% of their overall grade in the course. This activity only offers 5 points.)
First, I showed them the following pictures and told them that if they could identify three of the four flags, they’d earn two points.
Of course, it helps that I chose very recognizable flags. A tricolor in black and white would be much harder to identify. But I’m not sure how many of my students could identify the flags for many other countries even if they weren’t distorted.
I then showed them these pictures and told them that if they can identify three of the four celebrities, they’d earn three points.
I didn’t expect this to be a whole lot more difficult, and for most students it wasn’t. However, I realized after the fact that I should have made the point a different way, for reasons I discuss below.
The point I meant to illustrate, which I think came through for most students, is that none of us believes we live in a world of black and white, nor one in which people lack eyes. Yet those features are, for some purposes, inessential. Similarly, if I present a theoretical model in which two unitary states must decide whether to cooperate with one another, one could easily point out that states are not unitary actors, or that there are more than two states in the system. Both of those things are absolutely true, and for some purposes, cannot be ignored. But depending on the question we’re asking, assuming away domestic politics and systemic effects is no more distorting than throwing black bars over people’s eyes in photographs or removing the color from national flags. My goal is to get my students to go from asking “does the real world look like this?” to asking “does it matter that the real world unquestionably doesn’t look exactly like this?”
As I acknowledged above, though, the pictures of celebrities didn’t work as well as I’d intended. I asked my students if anyone who had a hard time identifying any of the celebrities thought that they wouldn’t have if they could have seen the eyes. A few said yes, specifically Jennifer Lawrence. If that was the only issue, I’d swap out the picture of her for one of someone who is more recognizable or whose most recognizable feature is not their eyes. But the more important concern, which I really should have anticipated, is that international students had a much harder time with the second half of the activity than the other students. That was insensitive on my part, so I ended up giving all the students who submitted responses full credit. I think the activity worked well over all, and so will do something similar again in the future, but I won’t use pictures of celebrities. If anyone has any suggestions for something that would be similar in spirit but more likely to transcend culture, I’d love to hear about it in the comments.