Many of you seem to have read my earlier post knocking the use of assumptions in theory-building, particularly rationalism, and Phil Arena’s defense of it. My earlier post was a little over the top and insulting, which led him to take umbrage. I called him a dick; he called me a dick. We were both right, although I guess I started it. When we were thinking about whom to ask in as guest contributors, my main criterion was theoretical and epistemological diversity. Then I pulled this. Now we are going to hug it out. Come here, buddy. Give me some sugar. Wait, wait…. No tongue. You are still just a guest contributor. Only over-the-shirt action.
But……..! I read Phil’s rebuttal and I still don’t get it. His position seems to rest on two points: first, that everyone uses assumptions in theory building, even in their daily lives, so rationalists are no different from anyone else; and second, that assumptions, even those that don’t reflect reality, are still useful in getting us somewhere.
Phil makes his first point through the example of driving a car. When we get on the freeway, we are putting our lives at risk based on an assumption that others are not going to drive into us. Apparently Phil thinks that we cannot base that on evidence. Rather, we are making an assumption, beyond what the evidence allows, that others will not cross the yellow lines.
Three thoughts. First, this does not seem like an assumption at all to me. I think I know that people do not want to die based on the years of data I have collected in my life — how they drive, how they go to the doctor to the bitter end, etc. Now it is possible that I will encounter a suicidal driver. But such an instance would (and does) attract so much attention precisely because it is so at odds with what usually happens. That is, we all know that people generally don’t want to die; that’s why we pay so much attention to the people who do. It is so surprising. So there is a difference between a safe assumption, one that we have a lot of reason to believe, and an unsafe one.
Second, there is no such parallel in theorizing about international relations. What do we know about what states want? Or what politicians want? Do we know that states want to survive? Yes. But there is no answer for states as easy as “stay on your side of the road.” It is a much more complex problem, and hence the use of assumptions is likely to be that much more detrimental to theory-building. These are unsafe assumptions.
Third, it strikes me as highly ironic, and a little bit cynical, to justify the use of assumptions in theory building through what looks like an ontological assumption that people are not expected-utility maximizers. What Phil is describing seems to be a social practice or a cognitive heuristic. So rationalists can assume that individuals maximize expected utility because we know empirically that social life is predictable only because individuals don’t act that way?
I personally think, and maybe someone can show me that I am wrong, that I don’t make any untested assumptions in my own work. My template of late has been to look at how individuals have actually been shown to behave in empirical settings and then find parallels for international relations. I use the cooperation literature in social psychology to make the case that individuals behave differently in the same structural setting. This literature shows that the assumption of egoistic utility-maximizing does not hold up for a lot of people (though it does for some). This owes to different levels of trust. One might argue that by making such a leap to international relations I am relying on an assumption that the same dynamics apply. But I don’t. I generate a series of hypotheses, and measure trust the best way I can, to see if it actually does apply. My assumptions, if one could even call them that, are subject to testing.
Phil’s second point is that models are still useful even if they are based on unlikely, even false assumptions, because models are like maps: it doesn’t matter whether they are true. Here I think Phil is doing my work for me. Of course it makes sense to distinguish maps based on their veracity. A map that puts a river where there is a mountain range will get you drowned. The fact that maps should be as accurate as possible seems to be supported by the history of cartography, which has always been about improving on previous maps, giving us more information rather than less. The old maps where dragons were supposed to lurk past the European land mass were not very useful. Phil notes that a Garmin won’t show us elevation and is therefore not “true.” But this is because we don’t need to know the elevation provided our car can get up a hill. The Garmin simplifies reality but does not distort it. This is the key difference. I think that the assumptions made in a lot of rationalist models are distorting, not simplifying.
Another critique is that a simple map might get you somewhere but not really tell you how. This is particularly relevant if a rationalist model built on assumptions generates the same predictions as another theory that has a better sense of the causal mechanism by which that outcome is generated. An old Native American map that tells me to turn right at the rock that looks like a buffalo is not as good as the Garmin, even if I don’t get lost either way.
Rationalist models are indeed useful in a particular way: they get us to think in new ways. And this is what I most regret not saying in my earlier post. For instance, Fearon’s 1995 piece has influenced my thinking enormously, not because I think it is right but because it pointed out to me the importance of uncertainty about preferences. When I was criticizing the utility of rationalist models, it was their empirical accuracy I had in mind, not their conceptual innovations. This would all be fine if these assumptions were consistently subjected to empirical analysis. But that is far too rare. Rather, I see a certain bubble develop in which the assumptions are taken as true. An article I like a lot has a sentence to the effect of, “Since Fearon 1995 we know that war is ex post irrational.” Really, we know that? Phil might just call this sloppy work. I’d agree. I also think it is common. And those who know me know that I don’t confine this critique to rationalist work. I see sloppy work everywhere and post about it.
There is a certain “have it both ways” quality to rationalism. Rationalists will go after work that does not assume strategically oriented behavior based on self-interest (particularly in reviews, based on personal experience), yet fall back on the “it’s just an assumption” defense when one tries to hold their feet to the same fire. I guess you can tell that part of my post was based on a personal frustration that might owe less to what rationalism should be than to what it is.
But I think I got distracted from my initial purpose, which was to talk about the assumptions everyone makes! For instance, I think we lost a decade of good international relations scholarship by assuming that states were unitary actors. And I think most rationalists agree that that was wasted time, because it was such an (obviously) false assumption.
“I think I know that people do not want to die based on the years of data I have collected in my life — how they drive, how they go to the doctor to the bitter end, etc.”
Don’t want to get into this too much, but if people “do not want to die”–if they want to maximize years of life–how do we explain gluttony, tobacco, alcohol, risky sex, and all the other pleasures of life?
Some people do engage in behavior that’s harmful to themselves, of course. However, specifically with respect to tobacco, I believe it’s been shown that nicotine is addictive — it’s possible to stop smoking, but it’s not easy (cf. the nicotine patch) and some people who try don’t manage to stop. (I have never smoked and dislike being anywhere near cigarette smoke, but I grew up in a household with a parent who smoked so I have seen that behavior/addiction up close.)
Also, alcoholism (which I have not seen up close) is considered a disease — again, there’s a physiological dimension.
Yes, but that’s exactly the point that BR will have to assess–at what point does his assumption of life-maximization give way to other assumptions? We may think that the Becker rational-addiction model is crazy (it certainly seems so to me), but at least it addresses this fundamental puzzle: if life is desirable and more life is more desirable, why do people engage in life-shortening activities? Possibly the answer is “people are rational and risky behaviors are net utility gains in expectation”; possibly the answer is “people are rational and risk is itself a source of utility”; possibly the answer is “engaging in risky behavior reshapes preferences”; and possibly the answer is “people are rational on the whole but irrational in some predictable ways.” But none of those are, I think, particularly “obvious” answers.
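To make the first of those answers concrete, here is a toy expected-utility calculation (my own illustration with made-up terms, not Becker’s actual model):

```latex
% A hypothetical agent weighs the immediate pleasure of smoking, b,
% against discounted expected life-years lost. With per-year value of
% life v, discount factor \delta, expected remaining years E[L], and
% expected years lost k:
\[
EU(\text{smoke}) = b + \delta\, v\,\bigl(E[L] - k\bigr),
\qquad
EU(\text{abstain}) = \delta\, v\, E[L].
\]
% Smoking maximizes expected utility whenever b > \delta v k: a large
% enough immediate payoff, heavy enough discounting, or a small enough
% expected loss makes the risky choice optimal without abandoning
% utility maximization.
```

Nothing in that sketch is obvious either, of course; it just shows that “life-maximization” and “risky behavior” are not automatically inconsistent.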
“Certain theoreticians have tried to find the equivalent of the rational goal of sport or economics for international relations. A single goal, victory, exclaims the naïve general, forgetting that military victory always affords satisfaction for amour-propre, but not always political benefits. A single imperative, national interest, solemnly proclaims the theoretician, hardly less naïve than the general, as if adding the adjective national to the concept of interest were enough to make it unequivocal. International politics is a struggle for power and security, declares another theoretician, as if there were never any contradiction between the two, as if collective persons, unlike individuals, were rationally obliged to prefer life to reasons for living.”
I’m trying to be as charitable in reading commenters as I would hope they are in reading my words, but I’m a little at a loss here. This is all well and good, and pretty much all theorists I know acknowledge the limits of their claims. But are we to throw aside simplification and formalization of our claims—even if they are “irrational” but still systematic, as in Tversky-Kahneman—because the world is messy? Formal theory, whether rationalist or not, is entirely compatible with optimizing multiple factors, even though the math gets fuzzy; often, my work postulates exactly that. What I am confused about here is whether the objection is to bad work or to all such work.
PM – my comment was simply a quote from Raymond Aron, seeking to problematise the notion that states (or individuals) always prefer “life to reasons for living”.
OK. I haven’t read Aron so even though the Google turned up the passage it didn’t click.
#Personal communication: PM: I really enjoy your posts and comments. Keep them up
thanks Halvard!
Very briefly, it seems to me there is at least one other possibility: some people may be genetically predisposed, given certain environments, to engage in ‘life-shortening activities’. (In fact there was something on the NewsHour just last night about genes and alcoholism. Haven’t had a chance to watch it yet, but it’s on their website.)
Brian,
You claim that you use the cooperation literature in social psychology to find parallels in IR. The empirical issues in social psychology notwithstanding (what with the blatant data forgery that prompted even Kahneman to issue a statement urging the field to consider the importance of data openness and replicability), you are _assuming_ that the actions these experimental subjects (often college students) take are applicable to the decisions a leader might make. Call it what you want (hypothesis-testing and whatnot), but you’ve made an assumption, plain and simple.
Now on to the measurement of trust. Is it not true that once you operationalize a variable as abstract as trust – which means different things to different people, let alone cultures – you are assuming that whatever proxies you choose actually tap into some aspect of trust? If not, what are you doing?
Are empirically testable assumptions not assumptions? What are they, then? Perhaps you should define what an assumption is, which is probably what you should have done from the outset. Exactly what assumptions in “rationalism” do you find so unpalatable? And if you think that the assumption that individuals have preferences over outcomes that don’t cycle cannot be tested, you’re flat-out wrong.
Perhaps your experience with some reviewers has left a foul taste in your mouth, and I’m not sure what your intentions are with these posts, but as I’ve said in my comment on your earlier post, you’re assuming a whole lot about an entire body of work. As you’ve stated, there is good work and sloppy work. Your tone seems to suggest that all “rationalist” work is sloppy based on a few papers you’ve read.
Indeed, the unitary actor assumption in IR was a huge waste of time, but who made that assumption? Certainly not the “rationalists.”
“The fact that maps should be as accurate as possible” — this isn’t true. An interrupted sinusoidal or Goode projection might be argued to be the most accurate, but it would be pretty much useless for navigation. And AFAIK it’s the distortions and falsehoods in the good old Mercator projection that make it useful navigationally. Likewise, the most useful transit maps are typically the least accurate, following the example of the London Underground map.
Said it in another thread, but the London tube map isn’t inaccurate. It is, in fact, a very accurate representation of what it is attempting to map: the lines, the stops, and the relationships between them.
Which is not true of “rationalist” work because…you say so?
Can you explain to me why it would be wrong to summarize your position as “simplification is wrong except when it isn’t”? I’m sure that isn’t actually your position, but I haven’t the foggiest idea *how* what I just said differs from your actual position.
My, people are touchy on this subject, aren’t they? Nothing I say in this reply says anything about rationalist work, let alone that it’s wrong because I say so. It was a simple reply to Jim’s post about the London tube map. I do have a position on the rationalist assumptions issue, but I’m not articulating it here. For what it’s worth, I’m against the “it’s OK if it’s useful” position (instrumentalism, technically): it’s fine as far as it goes, but it’s not an accurate description, either of what’s going on or of what good science does. The question science always asks is “why is it useful?” Very little “science” ever rests content with “it works, that’s good enough”; that might be the case in technology/engineering fields, or even for particular moments in the history of a given science, but eventually somebody will want to know why it works.
I apologize if that came across as touchy. I was a bit surprised at the tone you yourself have adopted in your comment on the other post, but let’s chalk that up to mutual misunderstanding.
When you say that the London tube map isn’t inaccurate because it portrays “what it is attempting to map,” it becomes a bit harder for me to understand why you’ve said the things you’ve said about rationalism (which I understand you did not say in your comment on this particular post). I fail to see why a theoretical model that did not aim to faithfully capture all of reality, but only some portion thereof, would be considered a “spectacular failure” by someone who says that maps are not inaccurate so long as they achieve what they aim to achieve. This looks to me like you are embracing instrumentalism, though you explicitly claim to reject it. So color me confused. You could argue, as you did in your comments on the other post, that most social science fails to reach the level of usefulness that most maps do. That’s fine. I conceded exactly that yesterday. But I don’t know how to reconcile that with the claim that “useful” is not a valid standard.
No problem, Phil, it’s the medium. Useful is not a valid standard in and of itself for science. I’ve accepted in print that at certain points in the development of any science instrumentalism might be the best option we have, but eventually the realism principle will kick in. So it can be part of science, but ultimately in any science somebody will ask, “but why is it useful?” No, instrumentalism is basically the Friedman position that theories don’t have to be realistic (in fact they shouldn’t be) and that we can come up with any assumption we want as long as it’s useful. My position is that things like the London tube map are useful precisely because they are realistic in terms of what they are attempting to map; and what other kind of realism do people think they fail to provide?

To give another example: a new strain of flu emerges in India and scientists start working on different assumptions about it. One assumption seems to be useful in controlling the disease, but the scientists will want to find out what it is about the assumption that makes it work. So they’ll start unpacking the assumptions and testing them; they won’t just rest on the utility of the assumptions. They’ll want to identify exactly what is new about this virus and how accurately the assumptions model it.

So Pol Sci, for me, is a spectacular failure (not only because of rationalism, btw, which I have less trouble with than some people) because it’s not even useful, and to me it’s not useful because it rests on the utility principle, doesn’t go beyond it, and can’t, because it’s not really interested in testing how realistic the assumptions are. All theory is abstraction, but that tells us nothing unless we start asking which abstractions are better than others. Usefulness is one category, but no science can or should rest content with that as its defining characteristic.
I understand your position better now. Thanks for clarifying.
FWIW, I think it’s perfectly reasonable to say that science must ultimately strive for realism, at least to the degree that the natural sciences do. I think we’re in agreement that the social sciences are unlikely to ever meet that standard, even if we disagree about how damning that is. I would, however, respectfully take issue with the claim that rationalism fails even by instrumental standards, for reasons I laid out in my comment on the other post.
I don’t know your work – we’re involved in very different research areas – but I can still be certain that you make untested assumptions. The act of doing science requires that we make certain assumptions. At the very least, you must assume that we can perceive at least a distorted version of reality and that it is possible to make law-like, causal statements about the world. These are non-trivial assumptions that occupied some of history’s most brilliant minds (Descartes, Hume), and a failure to grant them held back many fields from developing any knowledge for years.
In addition, generalizations are ABSOLUTELY required for any theory-building exercise. Without them, the best we can hope to accomplish is accurate description of the world, and political science has no reason to exist. People who do formal theory tend to do their work by selecting a particular set of generalizations, calling them assumptions, and then seeing what comes out of this process. In general, this reflects a decision to focus on strategic interaction as the interesting theoretical problem. In order to focus analytically on that process, it is necessary to hold certain other things constant.
Progress in rational choice theory, however, very much takes the form of discarding or modifying assumptions seen as problematic. To follow on your example of Fearon 1995: look at the theoretical models that followed it. The goal of most of them was to relax and change some of his assumptions, for example by allowing a richer interaction involving costly signals or by allowing further bargaining within war. Certainly, the people involved in this work care about making better assumptions when this is possible, and many simplifying assumptions have been abandoned as empirical work shows that they do not hold.
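For readers who haven’t worked through it, the baseline logic of Fearon 1995 is compact enough to state (this is the standard rendering of his simplest setup, nothing novel):

```latex
% Two states dispute a good worth 1. War gives state A the good with
% probability p and destroys c_A, c_B > 0 in costs, so
\[
EU_A(\text{war}) = p - c_A, \qquad EU_B(\text{war}) = (1 - p) - c_B .
\]
% Any peaceful division x of the good with
\[
p - c_A \;\le\; x \;\le\; p + c_B
\]
% leaves both states at least as well off as fighting. Because
% c_A + c_B > 0, this bargaining range is always non-empty: war is
% ex post inefficient, and the puzzle becomes which frictions
% (private information with incentives to misrepresent, commitment
% problems, issue indivisibilities) keep states from settling there.
```

The follow-on literature is precisely about swapping out pieces of this setup and seeing which conclusions survive.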
I get the feeling that your real beef is not with substantive simplifying assumptions like “war is a costly lottery over exogenously fixed outcomes” but with expected utility maximization itself, and this too has been relaxed. Rational choice theory is a rich tradition, and we have developed many concepts (e.g., quantal response equilibrium, QRE) that let us see what happens when actors deviate from the rationality postulate.
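In case the alphabet soup is opaque, here is a toy sketch of how logit QRE works (my own illustration with made-up payoffs, not code from any published model): each player best-responds noisily, choosing actions with probability proportional to exp(λ × expected payoff), where λ = 0 is pure noise and λ → ∞ recovers Nash play.

```python
import numpy as np

# Toy logit quantal response equilibrium (QRE) in a 2x2 game, computed by
# damped fixed-point iteration. Payoffs are hypothetical: an asymmetric
# matching-pennies game where Row wins big on (H, H).

U_row = np.array([[9.0, 0.0],   # Row's payoffs: rows = Row's action (H, T),
                  [0.0, 1.0]])  # columns = Column's action (H, T)
U_col = np.array([[0.0, 1.0],   # Column's payoffs, same indexing
                  [1.0, 0.0]])

def logit_response(lam, expected_payoffs):
    """Noisy best response: P(action) is proportional to exp(lam * EU)."""
    z = np.exp(lam * (expected_payoffs - expected_payoffs.max()))  # stable
    return z / z.sum()

def logit_qre(lam, iters=10_000, damp=0.05):
    """Damped fixed-point iteration toward a logit QRE of the game above."""
    row = np.array([0.5, 0.5])  # start both players at uniform mixing
    col = np.array([0.5, 0.5])
    for _ in range(iters):
        new_row = logit_response(lam, U_row @ col)    # EU of Row's actions
        new_col = logit_response(lam, U_col.T @ row)  # EU of Column's actions
        row = (1 - damp) * row + damp * new_row
        col = (1 - damp) * col + damp * new_col
    return row, col

for lam in (0.0, 0.5, 5.0):
    row, col = logit_qre(lam)
    print(f"lambda={lam}: P(Row plays H)={row[0]:.3f}, P(Col plays H)={col[0]:.3f}")

# lambda = 0 gives uniform randomness; as lambda grows, play approaches the
# mixed Nash equilibrium (Row: 0.5, Column: 0.1). At intermediate lambda,
# Row overplays H relative to Nash, the kind of "own-payoff effect" found
# in lab data that Nash equilibrium alone cannot reproduce.
```

Whether such relaxed-rationality models give different substantive answers for IR is, of course, exactly the empirical question at issue.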
I don’t think anyone believes assumptions are sacred, but for those of us who do formal work it’s very easy to get exasperated with people who latch onto assumptions they don’t like and criticize us for their lack of realism without engaging with the analytical substance of the work or showing why a different set of assumptions would lead to different results. For example, I recently gave a talk where the analytically interesting results arose from uncertainty about preferences. My model has the interesting feature that these results arise with minimal (i.e., any ε > 0) uncertainty, so I presented the results assuming only that absolutely minimal uncertainty. At the end of the talk, I received five downright aggressive questions in a row complaining that I had assumed far too little uncertainty. Try as I might to explain that more uncertainty gave EXACTLY THE SAME ANSWER but was, in my opinion, a stronger assumption, the audience was totally lost. They missed the forest for the trees because dissecting the descriptive realism of the assumptions blinded them to the contribution (some of this may be on me for choosing to present things the way I did).
This is why many of us in the rational choice community are sick and tired of hearing that our assumptions are inaccurate. We know they are inaccurate. We care about this insofar as using more accurate assumptions gives us different answers. Often, relaxing assumptions to make them more accurate makes things more complicated but doesn’t tell a different story, so in my view the onus is on critics to show how relaxing a given assumption leads to new conclusions. The tools are out there to alter nearly any assumption you want, whether by building your own model with different substantive assumptions about utility functions or strategic structure, or by relaxing rationality requirements with QRE, level-k, cursed equilibrium, or other solution concepts from behavioral economics. If you get new, interesting answers to questions about politics, then more power to you, but don’t jump down game theorists’ throats for making assumptions that aren’t “true.”