In his most recent post, PTJ argues that “things like Freakonomics are basically corrosive and should be opposed whenever practicable”.  While he repeats in that post (and the comments section) a number of dubious claims about what sorts of behavior are possible within a decision-theoretic framework, I think we’re past the point in the conversation where it is useful to argue about the possibility of writing down a decision-theoretic model whose actors are capable of moral behavior and belonging to communities.1 In this post, I’d like to discuss the moral argument PTJ makes against decision-theoretic work.
Let’s set aside debate about what sort of behavior decision-theoretic accounts might potentially allow for. When someone claims that something I know to be possible is impossible, I consider it appropriate to object, but it’s clear that what concerns PTJ most is not whether it is possible to model certain things, but the impact “decision-theoretic” work is actually having on us as human beings.
Some might find it distasteful to suggest that we refrain from conducting research for any reason other than a demonstrated lack of scientific merit.  For example, Tdaxp calls PTJ’s critique of rational choice anti-scientific.  But that’s unfair.  When faced with a fundamental tradeoff between two values, the person who chooses the course of action that best promotes value 1 need not do so out of any particular opposition to value 2.  Nothing PTJ said implies that he does not value scientific inquiry, only that he is willing to place limitations on it when its conduct impinges on certain moral values.  But so are we all, to some degree.  Unless you favor removing all restrictions on human and animal research, you, like me, are in the same broad camp as PTJ.
The question, at least in my mind, isn’t whether it’s ever appropriate to limit scientific inquiry for ethical reasons—it’s how corrosive the conduct and dissemination of “decision-theoretic” research really is. Well, that, and how much insight we might gain if we were willing to accept that cost.2
Let’s go back to Freakonomics. In the first chapter, Levitt and Dubner discuss an informal experiment conducted by Paul Feldman, who delivered bagels to office buildings and left a box nearby for voluntary payment. After finding that empty boxes got raided, as did cans with plastic lids, he decided his “honor system” needed a more secure way of storing cash. After moving to secure wooden boxes, he found that about 87% of people pay for their bagels, with some slight variation over time and across workplaces. Levitt and Dubner suggest that this tells us the answer to the question raised by “The Ring of Gyges” — most people, it seems, do not act immorally even if they have every reason to believe they can get away with it.
But of course, we don’t actually know how many would-be bagel thieves would have gotten away with it.  That Paul Feldman was not there to demand payment before handing over the bagel doesn’t exactly mean we’re talking about the Ring of Gyges. The question of how differently people behave when they’re not being observed has spawned a cottage industry — and some recent papers paint a somewhat bleaker picture than the one relayed in Freakonomics.  (Yes, really.)  See, for example, this study.  The results here do not suggest that most people behave selfishly when they can get away with it, but they indicate that significantly more than 13% of us do.  And all the authors did was offer people the opportunity to avoid having someone appointed as a potential recipient of their giving.  The participants’ behavior was still observed by the authors.  According to one study, when participant behavior is truly anonymous, more than 40% of participants in Dictator games keep everything for themselves.  That’s still not the majority, but it’s a big number.
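For readers who haven’t encountered it, the Dictator game these studies use is simple enough to sketch in a few lines. The following is purely illustrative; the endowment and the sample split are hypothetical numbers of my own, not figures from the studies linked above.

```python
# Minimal sketch of a Dictator game (illustrative; the endowment and
# split below are hypothetical, not taken from the studies cited above).

ENDOWMENT = 10  # dollars the dictator divides between self and recipient

def dictator_payoffs(amount_kept):
    """The dictator keeps `amount_kept`; the recipient gets the remainder
    and has no say in the matter."""
    assert 0 <= amount_kept <= ENDOWMENT
    return amount_kept, ENDOWMENT - amount_kept

# A narrowly self-interested (Homo Economicus) dictator keeps everything:
print(dictator_payoffs(ENDOWMENT))  # (10, 0)

# Yet in observed play most dictators give something away; the share who
# keep everything rises when choices are made truly anonymous.
print(dictator_payoffs(7))          # (7, 3)
```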
Most people, I’m sure, are not familiar with these results. And there’s a case to be made that the more we inform people of such results, the more we encourage them to behave the same way themselves. (For what it’s worth, I’ve seen the opposite happen — I’ve had students tell me that, upon learning about the free rider problem and the challenge it poses to collective action, they will be sure to contribute even more in the future as a result.) So should we stop such papers from being published? Should IRBs deny proposals to conduct similar experiments?
If I understand him correctly, PTJ would say yes.  And I respect that position.  But I’m not convinced the corrosive impact of such work is all that large.  I won’t say I believe it to be zero, and I think it’s fair to place some weight on it, but we’d need to be talking about a much bigger corrupting effect before I’d be willing to oppose scholarly inquiry.  There’s value to increasing our understanding of human behavior.  If it turns out that most of us don’t behave like Homo Economicus, but a very significant minority of us do, I want to know that.  Even if it encourages some people to adjust their behavior.  However, if you could show me that the widespread dissemination of this knowledge would cause most of the currently well-behaved majority to turn selfish, I’d honestly reconsider.
What about you?
1. Most of what he writes makes infinitely more sense to me when I assume that he views all decision-theoretic models as being populated by Homo Economicus. That he more or less equates Freakonomics with rational choice strikes me as quite telling. And when he says that Public Choice is not concerned with collectives because collectives don’t have preferences, there’s obviously a certain truth to this — and a good chunk of social choice is dedicated to illustrating this claim — but I just don’t know how to square such a claim with the prevalence of models containing a “social planner” who seeks to maximize a “social welfare function”. I just can’t see how anyone who has read Maggie Penn’s work on identity (see here and here) could talk about decision theory’s “autonomous” actors being intrinsically incapable of belonging to communities. He also writes about calculation as if he doesn’t believe the “as if” assumption is a thing. Unless he means something different by his words than what I take them to mean, he is asserting the impossibility of things that are very much possible.
2. This, of course, presupposes a consequentialist view of ethics. So far as I know, the only argument in favor of deontological approaches to ethics is itself deontological, and the only critique thereof is consequentialist. Thus, I have no idea what a consequentialist like me can say to a deontologist who takes issue with my decision to evaluate “decision-theoretic” work in this way. Or what a deontologist could say to convince me that I should do otherwise.
“In this post, I’d like to discuss the moral argument PTJ makes against decision-theoretic work.”
Sounds good. Alas, there’s nothing “after the break.” Could you re-post?
Yeah, that was a mistake. I’m not done writing the post. Sorry about that!
Interestingly, there are literatures on these subjects that put the ‘invention’ of rational choice theory in a historical context, making PTJ’s argument about how it influences (and is potentially corrosive to) democratic decision-making. Amadae’s Rational Choice Liberalism and Tuck’s Free Riding come to mind as two good books in political science, although there are many more.
One interesting problem in IR, I suspect, is that alternative models of decision-making are rarely taught alongside rational choice theory, so that students can assess differences and veracity. In American politics, in contrast, it seems there might be more exposure, for example in debates about political behavior (the rational public and its criticisms).
I want to stick, though, to my criticism of PTJ, which is consistent with your position: even if you dislike RCT, there are still reasons to model behavior using RCT in order to show the kinds of problems (e.g., collective action problems) that might emerge from that approach. Contrasting it with alternative models, whether from social theory, cross-cultural psychology, or other fields, then provides comparative approaches that can be used as critique.
Interesting post.
I’ll check those out, Eric. Thanks for the suggestions.
You’re right, it is interesting that graduate study of American politics typically involves some discussion of alternative models of decision-making while the same is rarely true of IR. I find most of that literature to be frustrating, since so much of it focuses on knocking down a strawman, but students should definitely be aware of alternatives.
If only we could get students to take 6 years’ worth of classes. :p Or better yet, find a way to squeeze 6 years’ worth of learning into 2 years…
I missed the part where it was established that rationalist exercises like Freakonomics have a corrosive effect on society. The corrosive effect of cosmetics on rabbits’ skin and eyes (to use your example) seems rather more palpable. I’m not sure the analogy holds.
If some genius social scientist were able to develop a plausible measure of the level of self-interested behavior in human societies, I would be shocked beyond description if recent trends in social science turned out to have had any discernible effect on it.
Of course, this leaves open the criticism that, if corrosion of society is a problem, formal analysis is not much of a solution.* I’m not sure what would be, nor am I sure that political science (or academic study of politics, if one prefers), holds a comparative advantage in this regard.
*Though it’s all in the presentation. As the author notes, some of the earliest applications were precautionary (see G.Hardin’s “Ruin is the destination towards which all men rush…”)
I share your skepticism that recent trends in social science have had any but the most negligible effect on aggregate levels of self-interested behavior. There is, however, some evidence that econ students behave more selfishly than other students, and that this isn’t entirely driven by selection (see, for example, this paper https://www.econstor.eu/dspace/bitstream/10419/36229/1/617554374.pdf). I’m not convinced this accounts for enough overall variation for us to really worry about it, and it’s hard to tell if that effect is unique to economics rather than the prevalence of “decision-theoretic” approaches within economics, but it’s part of why I’m willing to assume that the effect PTJ has in mind might exist (on some small scale).
It might be important to point out, though, that one strong (in some areas dominant?) strand of the rational choice tradition is normative rather than descriptive. “Rational agents should do x in situation y.” In this case, PTJ’s arguments bypass the concerns about actual influence (except for the argument, of course, that academics are inconsequential in general).
Fair point. Most people I know who do this type of work are only interested in descriptive interpretations, but you’re right that the normative strand is out there. And the descriptive stuff is sometimes mistakenly read as normative.
Hey Phil Arena,
Thanks for mentioning my reaction in the post. I appreciate it.
I want to clarify something.
“For example, Tdaxp calls PTJ’s critique of rational choice anti-scientific. But that’s unfair. When faced with a fundamental tradeoff between two values, the person who choose the course of action that best promotes value 1 need not do so out of any particular opposition to value 2.”
If this were the standard by which “antiscientific” was judged, the word also would not apply to Creationists or Flat Earthers, whom I also labeled with the term. By “antiscientific,” I meant not a philosophical opposition to the scientific method as such, but a belief that ideological priors should trump empirical findings in establishing theoretical validity.
This should of course be separated from the much narrower question of whether a specific action is specifically harmful. For instance, it certainly would be unethical to hire people to commit murder in order to conduct studies about people willing to murder, but opposition to this is opposition to a specific action, as opposed to simply sharing a morally problematic “truth.”
Hey, Tdaxp.
I see your point, but in my experience (some?) Creationists do in fact exhibit a philosophical opposition to the scientific method. There are folks out there who are genuinely ANTI-science.
The way I read PTJ’s post, he was expressing concern about harm being done rather than simply putting ideological priors ahead of empirical findings. The question in my mind is whether the harm he envisions occurs on anything more than a trivial scale. I strongly suspect that it does not.
(Btw, you can just call me Phil.)
Hey Phil,
Thanks for the reply. :-)
I’ve known many Creationists, and most (all?) of them hold positions that nearly mirror PTJ’s: a general support of the scientific method, combined with a belief that the moral worth of an argument speaks to its validity or viability. I’m not sure which Creationist argument you are referring to?
PTJ writes that such work “endorses and naturalizes a form of selfishness that is ultimately corrosive of human community and detrimental to the very idea of moral action.” The AND seems meaningful. He believes that Rational Choice should be opposed because knowledge of it is harmful, and because it is idealistically horrible. Even if you reject this, the argument is still nearly identical to Creationism: accepting XY theory leads to morally bad outcomes in society, because ideas impact actions, and should therefore be opposed.
In any case, PTJ’s argument is as anti-scientific as Creationism.
I’ve only known a few Creationists, and it’s possible that I’ve misinterpreted their position or that they were atypical. I don’t want to push too hard on that point. :)
Creationists are actually half-decent at formal modeling.
1. Start with the certainty that Noah and the ark were true.
2. How did he manage to fit two of everything?
3. Since it is certain that he did, he must have done whatever would maximize the plausibility of that occurrence.
— skipping a step or two here —
5. Ergo, Noah took only baby dinosaurs.
At least, so sayeth the Creation Museum, just outside Cincinnati.
Surprised to see Freakonomics quoted in this context, given the criticism of that work. And by that I mean most economists I’m aware of treat it as a work of weak sociology, and certainly not rationalist in any meaningful sense of the word. Also, anti-science comes in many forms. I think PTJ is probably anti-science in two senses: 1) he’s against the application of science in the social sciences (the problem here is what is meant by science); 2) he thinks science is useful but not true (he probably has no use for that word), and hence cuts down the authority of science in moral debates such as abortion, etc. I could be wrong though.
In what sense is science useful but not true?
Science cannot have a moral authority — it’s a tool to control, predict, and improve variation. Sometimes it’s really important to do this. Sometimes this can tell us things, like what things are made of. But Science can never tell us what something /is/.
If PTJ had simply argued that — that Rational Choice is a theoretical abstraction, but a fictional one — the controversy would have been moot.
I’m aware that most economists see Freakonomics that way. I quoted it because PTJ referred to it and I thought the story about the guy who delivers bagels was pertinent.
The problem, I think, is that no one has a proper definition of what “rational choice” is — which is why I rarely refer to it without scare quotes. Everyone has strong opinions about it, apparently, but hardly anyone seems to agree on what it is.
At the risk of beating a dead horse, or at least a horse that we’ve left on the path behind us, I want to suggest that maybe you and PTJ are talking past one another regarding whether it is possible to model altruism.
As I understand it, in rat-choice, altruism is modeled as preferences held by individuals which those individuals then attempt to satisfy through calculation or strategic action. PTJ’s argument is that by defining ‘morality’ as such, we dissolve it. If ‘helping others’ is a preference, then it becomes conceptually identical to preferences like ‘disregard females’ and ‘acquire currency’, and therefore its pursuit becomes as selfish as the pursuit of any other preference. Put another way, all preferences have the same function in orienting action, and it becomes impossible to philosophically distinguish between selfish and non-selfish behaviour.
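For concreteness, here is one standard way that framing gets written down; a minimal sketch, with a hypothetical ‘altruism weight’ of my own rather than anything from a particular model. The ‘helping others’ term enters the agent’s utility exactly the way any other preference does.

```python
# Illustrative sketch: 'helping others' entering a utility function the
# same way any other preference does. Weights and payoffs are hypothetical.

def utility(own_payoff, other_payoff, altruism_weight=0.5):
    """An 'altruistic' agent values the other's payoff at some weight.
    Formally, this term is indistinguishable from any other preference."""
    return own_payoff + altruism_weight * other_payoff

# Keeping all of a 10-unit pie beats giving 3 away at a low weight...
print(utility(10, 0))      # 10.0
print(utility(7, 3))       # 8.5
# ...but not at a high enough weight:
print(utility(7, 3, 1.5))  # 11.5
```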
But of course, in an ordinary language sense of the term, if we see someone list a preference for advancing the wellbeing of others, we understand this as altruism. We connect it to moral action. In a logical sense, we cannot differentiate it from ‘selfish’ preferences because all preferences are selfish, but of course, the meaning of our models goes beyond their logical content.
Am I correct in understanding your ‘knowing’ that modeling morality is possible as stemming from recognition of the above point? The thing is, I don’t think PTJ would disagree with it. Rather, I think PTJ’s point is that by becoming accustomed to conceptually framing morality in terms of a set of weighted preferences the realisation of which we pursue strategically, we are training ourselves to eliminate the ‘sacredness’ of morality. Suddenly it becomes possible to ask questions like ‘how much would I have to pay you for you to spit on your grandmother’s grave?’ Searle phrased it well in _Rationality in Action_ by relating that according to rat-choice, if I value my son’s life and I value a quarter, there are some odds at which I would wager my son’s life for a quarter. Of course, parents lead their children through dangerous and possibly unnecessary exercises all the time – like going swimming – for certain desired benefits, and it is possible to model that behaviour according to the rat-choice logic which Searle finds so absurd. The issue is what such models bring us and what they cost us.*
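To spell out the logic Searle is invoking, here is a reconstruction in standard expected-utility notation; this is my gloss, not a quotation from Searle, and the outcome labels are mine.

```latex
% A reconstruction of Searle's point in vNM terms (my gloss, not a quote).
% Rank the outcomes
%   A = son safe plus a quarter,  B = status quo,  C = son's life lost,
% so that A \succ B \succ C. The continuity axiom then implies there is
% some p \in (0,1) at which the agent is exactly indifferent:
\[
  u(B) = p\,u(A) + (1 - p)\,u(C),
\]
% i.e., some odds at which the wager becomes acceptable. The standard
% rejoinder is that lexicographic preferences (nothing compensates for C)
% simply violate continuity, and so fall outside this result.
```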
I agree with you that it should be an empirical question, whether the use of rat-choice modeling actually leads us to think about morality in conceptually incoherent terms, or causes us to begin to weigh moral obligations in terms of preferences in ‘real life’. But I get where PTJ’s discomfort is coming from.
Addendum to Tdaxp:
‘By “antiscientific,” I meant not a philosophical opposition to the scientific method as such, but a belief that ideological priors should trump empirical findings in establishing theoretical validity’
It seems to me that we start with those ideological priors. Our scientific methodologies are developed in order to give us traction for certain kinds of enquiries, and to allow us to usefully answer certain kinds of questions. What makes those enquiries worthwhile? Where do these questions come from? To say that we shouldn’t employ certain methodologies because they are morally repugnant or socially costly isn’t to deny specific empirical findings in the name of ideology but rather to say that the means by which we make these discoveries are bad enough to make them not worth it. After all, PTJ hasn’t denied a single specific empirical claim. Your analogy to creationism doesn’t work here, generally speaking, because most creationists are not denying the validity of entire methodologies but rather of particular claims about the world which are putatively justified given those methodologies. Does that make sense? This is why I think Phil is right when he draws a parallel to animal testing; he’s not saying ‘findings generated by animal testing are not warranted’ but rather ‘we shouldn’t generate findings through animal testing because animal testing is bad’.
*I’m aware of the irony of posing the issue in these terms.
“At the risk of beating a dead horse, or at least a horse that we’ve left on the path behind us, I want to suggest that maybe you and PTJ are talking past one another regarding whether it is possible to model altruism.”
Probably so.
“As I understand it, in rat-choice, altruism is modeled as preferences held by individuals which those individuals then attempt to satisfy through calculation or strategic action.”
Incorrect. This is a common view, and I understand why you might hold it. A lot of “rat-choice” work uses such language, particularly the early (classic) stuff. But if actors behave as if they calculated and strategized, that’s sufficient. It really doesn’t matter what goes on inside their minds (or doesn’t) before they do so. A lot of people find the “as if” assumption dissatisfying, and I’m not entirely unsympathetic to that, but a critique of RCT which simply ignores that (very common) assumption isn’t very useful.
Thanks for responding at such length, Phil. I definitely appreciate your thoughtful insights here.
But this seems like a curious statement: ‘A lot of “rat-choice” work uses such language, particularly the early (classic) stuff. But if actors behave as if they calculated and strategized, that’s sufficient.’
I’m well aware that formal models such as game theory do not presume any particular theory of mind – evolutionary game theory being a case in point. But doesn’t the very name ‘rational choice’ imply a theory of mind? In an ordinary language sense, how many people doing rat-choice don’t start with the axiom of folk psychology? In how many social science game theorists’ minds are the agents of models not actual calculators and strategisers? This is in fact an honest, non-rhetorical question. And for that matter, what exactly is the methodological difference between modeling ‘as if’ behaviour where the ‘as if’ agents are individual or corporate entities and actually presuming that they are, in an ideal-typical sense, homines economici [sic?]? If your scientific ontology is the same, I’m not sure adding ‘as if’ – ‘as if’ altruists held ‘help others’ as simply another preference – makes the slightest bit of difference, since we’re already in ideal type territory.
Even assuming that the ‘as if’ with which you qualify your own work – and I’ve no doubt you have read and thought at length about this – makes the difference you say it does, if the vast majority of social scientists operating from this methodology go beyond ‘as if’ and actually build their models around ideal-typically ‘rational choosers’, whatever that means, then the fact that game theorists (I’m presuming that we’re not talking about all formal modelers, since fluid dynamics or AGM don’t even need to model ‘as if’) could start from different assumptions doesn’t really change PTJ’s criticism, I think. That’s sort of like saying, ‘it’s /possible/ to test these cosmetics on animals in a non-cruel way, so even though almost every tester does do it cruelly, we shouldn’t establish a norm against it’. I wouldn’t categorically dismiss that argument, personally, but I’d be prima facie sceptical of it.
Once again, thank you for your patience and conversation. I am finding these discussions very clarifying and interesting.
Good questions, Simon.
The name “rational choice” seems to imply a lot of things, yes. That’s why I hate it. I really, really, really wish I could ban it from all academic discourse. You will rarely see me use it without scare quotes.
I don’t know what proportion of game theorists believe that people calculate and strategize most of their decisions. All I can say is that the game theorists I know personally — and I certainly don’t know all of them! — frequently tell me that they share my frustrations with the things people assume that we assume.
In some ways, there is little difference between actually calculating and strategizing and behaving as if you have done so. But in other ways, it makes a difference — not least because some critiques of “rational choice” center precisely on the assumption of calculation. I understand this might sound like clever word play. The bigger point is that there is no reason whatsoever to assume that the behavior we describe with complex mathematics is produced by agents performing complex mathematics in their minds. To take a few extreme examples: the motions of the planets are described by complex mathematics, but no one believes the planets are calculating anything; similarly, dogs are quite good at catching frisbees, and baseball players at hitting pitches, but no one has ever suggested that mastery of calculus and physics has anything to do with that. The planetary example is imperfect because we know that the motions of planets are produced entirely by external forces — there is no agency involved, whether radically autonomous or minimally so. But the point still stands.
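To make the frisbee and baseball point concrete, here is a toy sketch; everything in it (the rule, the numbers, the names) is hypothetical illustration, not anyone’s actual model. The agent below intercepts a moving target using a dumb local rule, yet an observer could equally describe the resulting path in the language of optimization. The calculus lives in the description, not in the agent.

```python
import math

# Toy illustration of the "as if" point: an agent that catches a moving
# target by stepping straight toward the target's current position, with
# no trajectory math "in its head".

def pursue(agent, target_path, speed=1.5):
    """Pure-pursuit heuristic. Returns the tick at which the agent
    reaches the target, or None if it never does."""
    for t, target in enumerate(target_path):
        dx, dy = target[0] - agent[0], target[1] - agent[1]
        dist = math.hypot(dx, dy)
        if dist <= speed:
            return t
        agent = (agent[0] + speed * dx / dist,
                 agent[1] + speed * dy / dist)
    return None

# The target drifts steadily rightward; the agent starts below and behind.
path = [(float(t), 10.0) for t in range(200)]
print(pursue((0.0, 0.0), path))  # the agent catches it after some ticks
```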
To be clear, I do believe that some people, some of the time, under certain conditions, do indeed think quite deliberately and strategically about their behavior. My point is that we need not assume this is always the case in order to benefit from analyzing models in which actors are assumed to maximize (expected) utility.
Personally, I don’t know how common I believe genuine altruism to be. I’m not entirely sure I believe it exists. But that’s not what’s being argued. PTJ said that it is impossible for such behavior to exist within a theoretical model in which actors maximize utility, and his reason for saying so is precisely the internal calculations he believes such models require actors to make. But they don’t. When I use calculus to describe the optimal trajectory for a bat to swing if it is to connect with a ball, I do not change the fact that the batter’s behavior was intuitive and reflexive rather than deliberative and calculated. Similarly, if a person walks by a blind homeless man with a box of money in front of him and does not steal the money, indeed never even considered doing so because that would be so wrong as to be literally unthinkable to them, I do nothing to debase them by writing down on a piece of paper “u_i(steal) = -10; u_i(walk past) = 0”. To suggest that I have is sort of absurd. For the very same reason, it is nonsensical to suggest that writing down such equations in reference to hypothetical actions, rather than those that have actually taken place, rules out the theoretical possibility of moral action.
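A minimal sketch of that observer’s move, using the same hypothetical payoff numbers as above; the point is that the utility function lives in the observer’s notebook, not in the chooser’s head.

```python
# Illustrative only: the payoff numbers are the hypothetical ones from the
# paragraph above. An observer records that the passerby never takes the
# money, and writes down a utility function that *represents* that choice.

utility = {"steal": -10, "walk_past": 0}  # u_i(steal) = -10; u_i(walk past) = 0

def predicted_choice(u):
    """What the model 'predicts': whichever action has higher utility."""
    return max(u, key=u.get)

observed_action = "walk_past"  # what the passerby actually did
assert predicted_choice(utility) == observed_action
# The representation rationalizes the observed behavior after the fact;
# it is silent on whether any deliberation occurred.
```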
I see your point about animal testing. Indeed, I tried to acknowledge it above. There is part of me that thinks we should not say things that are equivalent to “you shouldn’t steal because it will give you cancer” — that is, even if we are justified in condemning a behavior, the reasons we offer for doing so ought not be fallacious — but I understand and accept that there is a broader point here. If, in practice, most animal testing is cruel, and if there’s good reason to believe that this *must* be so (i.e., if any attempt to reform animal testing so as to make it less cruel would be doomed to fail), then, yes, we absolutely should ban animal testing. That’s why I acknowledged above that if I ever saw convincing evidence that disseminating the results of “rational choice” research substantially altered people’s behavior in an undesirable way, I would reconsider conducting such research. But I have to say that I’m quite skeptical that this is the case.
This is helpful. Also pleased and interested to hear that you see the very label ‘rational choice’ as problematic. It seems to me, given this exchange and your other points in the comments and in this post, that the morality of – shall we say? – game-theoretical approaches in modeling social behaviour hinges upon this question:
‘To what extent does modeling moral behaviour as if it were a preference-maximising strategy by a calculating agent lead to the modeler (or anyone) viewing moral behaviour as a preference-maximising strategy?’ Or more open-endedly, ‘does it change how we understand morality, and is this change for the worse?’
I agree with you that the answer is empirically ascertainable and should not be presumed. I have nothing more of merit to add, but I look forward to seeing if PTJ does.
I think that sums it up well. I particularly like the question of whether it changes how we *understand* morality, as distinct from whether it leads to more narrowly self-interested behavior. (Virtually?) no one thinks that the latter would be good, but reasonable people can disagree about whether society is harmed by a change in the relative prevalence of deontological versus consequentialist approaches to ethics. And while it would surprise me to learn that people who study “rational choice” become more inclined to engage in narrowly self-interested behavior, I’m not sure it would surprise me to learn that they are more likely to be consequentialists.
I’m not sure it necessarily makes the most sense to see this as a debate between deontological versus consequentialist conceptions of ethics. Though this could just be wishful thinking in a case where my tendency to (rule) consequentialism in most practical ethical dilemmas is in tension with my philosophical and empirical perspective on what morality is.
But it seems to me that most dilemmas are ones of value conflict. That is, I feel that spitting on bubi’s grave is wrong, but you just offered to donate $1 million to my charity of choice, and doesn’t this make it absurd for me /not/ to spit on bubi’s grave? So perhaps I’m now calculating, but I haven’t accounted for the origin of these values nor how they acquire their relative weighting. I’m well aware of the whole ‘oh hey, constructivism tells us how we get preferences and then we rat-choice them!’ fad in IR, but this perspective is precisely what PTJ is complaining about, I think. Nor have I figured out what precisely this sort of calculation /is/: is it an attempt to realise certain virtues by navigating norm-space? Is it the activation of habituated processes of cognition? Is it ‘slow thinking’? You can black-box this, or you could pick one and try to model it, but perhaps black-boxing or presuming the mechanisms of moral action in these ways is socially or analytically costly.
I’m more inclined to see this discussion as something like ‘consequentialist practical ethics vs. socially “thicker” accounts’, which need not presume a deontological approach and its metaphysical and psychological implications.
I’m not entirely sure I follow.
PTJ’s beef with “rational choice” seems to be that it portrays people as incapable of doing the right thing for the right reason, and thus encourages us to behave the same way ourselves. If one expresses fear of marginal shifts towards a world in which everyone tries their best to help others, but only because they want the reward of being regarded as the type of person who does so, is that not expressing a fear of consequentialism?
A rejection of consequentialism is not by default an argument for a deontological conception of morality. I’m also not sure that PTJ’s objections to ‘rat-choice’ amount to a rejection of consequentialism in general, since if I understand his critique correctly it’s that no morality is possible, consequentialist or otherwise, for homo economicus. But I think that the reason why he – and I – might prefer ‘team deontology’ to ‘team consequentialism’ is that the former can allow for fully immanent justification while the latter can only be a heuristic for knowing what to do in a particular situation once you already have your definition of the good.
I had understood PTJ’s argument differently, but perhaps I was mistaken. I won’t argue the point further. I think I’ve made clear what my positions are. I’ll leave it there.
Thanks for a pleasant and interesting exchange. I enjoyed it.
“Once again, thank you for your patience and conversation. I am finding these discussions very clarifying and interesting.”
It has been my pleasure. Thank *you* for *your* patience.
“Put another way, all preferences have the same function in orienting action, and it becomes impossible to philosophically distinguish between selfish and non-selfish behaviour.”
Becomes impossible to do so on the basis of intentions, yes. :) I agree completely. But there are other approaches to ethics.
(Sorry to respond piecemeal. This is an excellent comment, and you’ve highlighted several important issues.)
“Am I correct in understanding your ‘knowing’ that modeling morality is possible as stemming from recognition of the above point?”
Not quite, though I can see why it might appear that way. The reason I think PTJ’s point would be incorrect even if we accepted his view of ethics (and, for the record, I don’t) is that his critique only applies if we assume that calculation and strategizing take place. There is no reason to assume such. If an individual acted in a manner PTJ would describe as moral, I could, as an observer, write down a model that described said action in the language of utility maximization without altering the moral content of the original act in any way.
I understand your point about the sacredness of moral behavior and introducing the possibility of asking people how much you’d have to pay them to spit on their grandmother’s grave. And I think this is a great way of thinking about the disagreement (assuming we’ve understood PTJ correctly). I’d make three points though: 1) this is really a conversation about Homo Economicus (for whom nothing is sacred) rather than one about utility maximization or rational choice or decision theory; 2) though it is perhaps possible that people’s sense of what is fundamentally wrong — of what is sacred — changes once you introduce them to “rational choice”, this is hardly an established fact; and 3) it is also unclear that people would be incapable of compromising that which ought never be compromised if academics sitting in the ivory tower never wrote down models in which it is assumed that some people are capable of such behavior.
I might be wrong, and I do not want to put words into his mouth, but I think his central critique of your argument would roughly be to question whether the individual autonomy assumptions implicit in game theory are adequate for rethinking ethical questions. This is why he begins by talking about the radical isolation of individuals. In this sense, questions of Homo Economicus are not quite relevant, because any model that posits individual autonomy of choice (or even, by extension, any moral philosophy that highlights individual autonomy) may be problematic.
I suspect, in the end, that there are surprising similarities between your arguments. Rationalists, for example, have a ‘thin’ understanding of individuals, which is compatible with PTJ’s concern to minimize the relevance of individual autonomy in order to concentrate on ‘thick’ ideas that orient action.
I would be happy to agree if I understood exactly what “individual autonomy” entails and whether there is, in fact, any reason to say that decision-theoretic accounts necessarily assume it. Thus far, the only arguments I’ve seen to convince me of this seem to presuppose things that are clearly true of Homo Economicus but not necessarily true otherwise. At least, it looks that way.
I think you are right that there’s similarity in our arguments. I think it’s clear that PTJ and I have different views of *ethics*, but it’s not clear that whatever ontological disagreements we do or do not have are substantial.
This makes sense. But I have difficulty squaring it with rational choice economic historians, such as Greif on the Maghrebi traders or Milgrom, North, and Weingast on merchants, where the historical narrative really does seem to imply that there are autonomous agents. I suspect that qualitative rational choice work — of which there is a lot in IR — frequently claims that rational choice models are ‘true’ in the sense that they describe how people reason, and thereby how institutions form.
I’m trying to follow your position. Is your idea that rational choice models are ‘as if’ models that need to be tested against data in order to see if they provide useful theories, but that the data need not conform to the predictions of the theory? Or is it that rational choice models, to the extent that they are deductive, cannot be tested, because testing a deductive model is not useful? And that they therefore need not conform to the real world, because the real world affords no test?
I have only been following this conversation for so long, so I am not clear on your position.
Fair questions, Eric.
First, I would say that it’s entirely reasonable for someone who has only ever seen white swans to believe that all swans are white. But that’s still a fallacy. In other words, yes, there are people who write about human behavior in a certain way, and some of their work is quite prominent. But we can’t infer from this that all “rational choice” work necessarily must make the same assumptions. (Unless, that is, one defines “rational choice” to mean the body of work that makes precisely those assumptions. Which is fine—but if we do that, we need to acknowledge that a great deal of work containing formal models in which actors are assumed to maximize (expected) utility is not, in fact, “rational choice.”) Put differently, my position has never been “no one assumes those things.” It has always been “people need to stop saying that anyone who talks about utility maximization is forced to make those assumptions.”
Regarding the value of models, yes, I would say that it is meaningless to “test” a deductive model. One can use models to generate testable hypotheses, if one is so inclined, but that’s just one way to use models. And not necessarily the best one. The extent to which a model needs to conform to reality in order to be useful depends, of course, on the use for which the model was intended. Predictive models (which are rare, for good reason) are of little use if the correspondence to reality is too crude. Models used to generate testable hypotheses are useful insofar as the hypotheses they generate are non-obvious, non-trivial, and survive empirical scrutiny. That is, if a theoretical model causes us to look for an empirical pattern that does indeed appear to exist, and which we might not otherwise have even thought to look for, then even if the model contains some assumptions that are either false or non-falsifiable, it’s unclear to me why that should be a problem. But some models are used to give us a starting point: to tell us what we can, and cannot, account for while ignoring all the complexity of reality. Still others serve only as a check on logic: to see whether existing causal explanations for known empirical regularities are even internally consistent. If a model demonstrates that a prominent explanation for an established finding is invalid, then it makes a valuable contribution to our understanding of the world, if only by forcing us to pull up a plank in our pier that we hadn’t realized was rotten.
This makes good sense. Which do you see as the dominant position in IR?
Sadly, I think the dominant position is probably that models are of no use if they can’t be tested. :(
I see. So, kinda sorta, you are close to Clarke and Primo? I’ve seen you positively mention their book before.
Very much so.
Rather than jumping into specific arguments at this point, I’ll just point out that folks who are very concerned with communities, ethics, and rhetoric, such as Elinor Ostrom, Deirdre McCloskey, and Jack Knight, use choice-theoretic approaches — even formal ones — to drive key parts of their research programs. In fact, their ability to draw from choice-theoretic approaches as part of a normatively-conscious research program is a big part of their appeal. Anyway, those are just a few names of prominent people who are doing work that, at times, PTJ seems to suggest cannot (or should not) be done. Worth mentioning folks like this as the discussion over what “rational choice” can/cannot and should/should not do has gotten quite abstract.
Granted the *modal* “rat-choice” researcher isn’t those people, but the modal researcher in any sub-sub-field is probably not the person to be emulated.