This is more of a riff on Phil’s post from last week than a direct reply; the post that Dan and I wrote addresses more directly the issue of actor autonomy on which we think Phil misunderstood us (we and Phil were clearly on different semantic pages), so I am not going to go back over that ground here. Instead — and since we all basically agree that rational choice theory, as a species of decision-theoretic analysis, is located someplace in the tension between self-action and inter-action — I want to pursue a more specific point: the criticism of decision-theoretic accounts on both social-scientific and ethical grounds. In the former register, there are kinds of questions that decision-theoretic accounts are simply not adequate to help us address. In the latter register, the naturalization of individual selfishness that is inherent to decision-theoretical accounts, regardless of the preferences held by individual actors and however self-regarding or other-regarding those preferences might be, provides an important avenue along which all such theories can be called into question.
At the outset it is important to remember that rational choice theory is, methodologically speaking, a form of analyticism rather than a form of neopositivism. I agree wholeheartedly with Phil, and with Jim Johnson [ungated article, thanks to our friends at Sage!], on this point (which, parenthetically, makes the whole “empirical implications of theoretical models” project a bit of a methodological duck-rabbit). Presumptions about actor preferences and how they combine to form strategic situations are not falsifiable hypotheses; they are spare assumptions from which deductive consequences are produced. The disclosure of an equilibrium point in a game is not a falsifiable point-prediction about how actual actors will resolve a strategic dilemma; if anything, as Jon Elster noted years ago, it’s an ethical judgment about what should happen according to a specific notion of rationality. Characterizing a situation as a particular kind of game is not an explanation, but a critical, or maybe a critical-ethical, description. To the extent that this description figures into an explanation, it does so as a kind of baseline against which to assess the importance of situationally-specific factors that produce deviations from that baseline, which is (as I have argued in detail elsewhere) precisely what one should use an ideal-type for.
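To make the point about deductive consequences concrete, here is a minimal sketch in Python (the payoff numbers are entirely hypothetical) of a standard Prisoner’s Dilemma. The equilibrium simply falls out of the stipulated preference orderings; nothing about it is a point-prediction about what actual actors will do, which is exactly why it can serve as an ideal-typical baseline.

```python
# A minimal sketch (hypothetical payoffs) of how an equilibrium is a deductive
# consequence of assumed preferences, not an empirical prediction.
# Payoffs are (row player, column player) in a standard Prisoner's Dilemma.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def nash_equilibria(payoffs, strategies):
    """Return strategy pairs from which neither player gains by deviating unilaterally."""
    equilibria = []
    for r in strategies:
        for c in strategies:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(nash_equilibria(payoffs, strategies))  # [('defect', 'defect')]
```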
This in turn means that there are two appropriate lines of criticism for a rational choice account. Phil is entirely correct that most of the charges leveled against rational choice theory are quite off-base, either because they ignore the ideal-typical character of the rationality presumptions that inform a game model, or because they erroneously presume that rational choice theory is about the content of actor preferences rather than about the formal character of those preferences whatever their content. There is nothing inherently irrational about an actor preference schedule that is other-regarding or that seeks to maximize benefits to the group rather than to the individual; the confusion arises when people forget that if I do something to benefit the group, and I do it because I value those benefits more highly than I value individual returns, I am actually maximizing my own individual utility, because I am acting according to my individual preferences. If I derive psychological benefit from your happiness, and I act so as to maximize your happiness, I am not behaving irrationally — far from it, I am behaving very rationally indeed. The confusion lies in the specification of the game that I am actually playing: decision-theoretical accounts demand that the game in question always be specified in terms of individual actor preferences, so by definition the behavior that benefits you in the game has to benefit me more than alternative behaviors would in order for me to rationally engage in it. Either it makes me feel good to see you smile when I give you flowers, or giving you flowers is a long-term investment that will lead to you doing something for me in the future; either way, that is why I give you flowers.
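A toy illustration of that point, with made-up numbers: even a strongly other-regarding preference schedule is still an individual utility being maximized, because the weight I place on your happiness is itself part of my preference schedule.

```python
# Hypothetical numbers: an other-regarding preference is still an *individual*
# utility being maximized. My utility weights your happiness alongside my own payoff.
def my_utility(my_payoff, your_happiness, weight_on_you=2.0):
    # The weight is part of *my* preference schedule; valuing your happiness
    # highly is still valuing according to my own preferences.
    return my_payoff + weight_on_you * your_happiness

options = {
    "give_flowers": {"my_payoff": -5, "your_happiness": 10},  # costs me, delights you
    "keep_money":   {"my_payoff": 5,  "your_happiness": 0},
}

best = max(options, key=lambda o: my_utility(**options[o]))
print(best)  # 'give_flowers': other-regarding, yet still maximizing my own utility
```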
Or, to put this another way: within the confines of a decision-theoretic account it is a logical impossibility for there to be any such thing as altruism, defined as engaging in behavior that does not in some way produce a positive return for the actor; nor can there be any such thing as moral action, strictly speaking. “I follow the moral rule because it gives me satisfaction” is not, as Kant pointed out a long time ago, the same thing as “I follow the moral rule because it is the right thing to do,” so in order to be acting morally in a strict sense, an actor needs not only to ignore worldly consequences but also to ignore any personal, psychological benefits of doing the right thing. And it is precisely this that cannot happen in a decision-theoretic universe like the one envisioned in rational choice theory, because action in such an account arises by definition from the interaction of actor preferences and a strategic situation. And this in turn means that decision-theoretical accounts by definition reify and naturalize a particular form of selfishness: the satisfaction of individual desires is the highest form of good, even if those individual desires happen to be arranged such that pursuing them (whether deliberately and strategically or by accident is irrelevant here) generates a better outcome for everyone.
This is of course the standard line of argument supporting a market economy — if we all act on our interests, we generate the best of all possible equilibria, from which no one has an incentive to deviate — and it also leads to the more general set of social-theoretical concerns that we might call “the Hobbesian problem of order”: if society is composed of a number of autonomous individuals acting on their preferences, how do we ensure some kind of stable outcome? The market solution is one way to go here, as is the deliberate manipulation of the strategic environment (through laws, price controls, etc.) such that rational action is channeled into particular pathways as actors determine their optimal strategies (again, whether this happens deliberately and consciously or by accident is not relevant). Another solution is “socialization,” which ensures that actors have the same or similar preferences, and will thus consistently choose the same things. In any case, the point is that any decision-theoretic account, whether it fits into any particular definition of “rational choice theory” or not, is going to have the same basic structure, and end up affirming a certain kind of selfishness, because the relevant theory of action involves individual actor preferences — and this is the case *regardless* of whether the actors are selfish for individual material gain, selfish for the success of their cause, selfish for the alleviation of gross human rights abuses, or whatever. In any decision-theoretic account, individual desire makes the world go around, the interaction of desire plus the strategic situation produces a set of options which can be rationally adjudicated, and moral action is impossible.
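And to illustrate the “manipulation of the strategic environment” move, a sketch along the same lines as the earlier one, again with hypothetical numbers: imposing a fine on defection changes the payoffs, and with them the equilibrium, without touching the actors’ underlying preferences at all.

```python
# A sketch, with hypothetical numbers, of "manipulating the strategic environment":
# a legal penalty on defection changes the payoffs, and with them the equilibrium,
# while the actors' underlying preferences stay exactly the same.
strategies = ["cooperate", "defect"]
base = {("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (0, 5),
        ("defect", "cooperate"): (5, 0), ("defect", "defect"): (1, 1)}

fine = 4  # hypothetical fine levied on any player who defects
regulated = {(r, c): (p1 - fine * (r == "defect"), p2 - fine * (c == "defect"))
             for (r, c), (p1, p2) in base.items()}

def equilibria(payoffs):
    """Strategy pairs from which neither player gains by deviating unilaterally."""
    return [(r, c) for r in strategies for c in strategies
            if all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in strategies)
            and all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in strategies)]

print(equilibria(base))       # [('defect', 'defect')]
print(equilibria(regulated))  # [('cooperate', 'cooperate')]
```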
And this, in turn, leads to the more properly social-scientific critique of decision-theoretic accounts, including rational choice theory: because such accounts depend by assumption on constitutively autonomous actors selfishly pursuing their own desires, they are incapable of explaining fundamental changes in those actors themselves. Desire cannot be endogenized in such accounts without producing insoluble equations, and what an actor wants has to be kept strictly exogenous in order for the game to make any sense. To be clear, it is no problem to endogenize an actor’s preferences over strategies — my giving you flowers rather than chocolate can be shown to have arisen from a calculation based on information available to me — but endogenizing preferences over outcomes, i.e. fundamental desires, is the problematic thing. Nesting preferences over outcomes within a bigger game only pushes the problem back one level: if my desire to make you happy comes from a calculation about future benefits to myself, that only means that the game I’m playing is a more expansive one, and in that more expansive game we can still find my exogenous preferences and true desires. Likewise, alterations to the constitutive autonomy of actors can’t be endogenized; if we want to explain the boundaries that an actor has from a decision-theoretic perspective, we have to go back one step and specify other decision-making units for which coming together to form the actor made sense (e.g. states that choose to pool their sovereignty), and those prior units are themselves exogenous to the account. Shifts in the very terms of actor-hood can’t be modeled in a decision-theoretic way; nor can the ongoing maintenance of a particular kind of actor, because that actor is presumed at the fundamental level of the individualist scientific ontology informing the decision-theoretic account. If our scientific-ontological primitives are autonomous actors with desires and preferences, we cannot explain where actors and their desires come from, nor can we forecast how they might change.
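A small sketch of the strategies-versus-outcomes distinction, again with invented numbers: preferences over strategies (flowers versus chocolate) can be derived inside the model from beliefs plus preferences over outcomes, but the outcome preferences themselves simply have to be handed to the model from outside.

```python
# Sketch: preferences over *strategies* can be endogenized (derived from beliefs
# plus fixed preferences over outcomes), but the outcome preferences themselves
# enter as exogenous inputs -- the model cannot say where they come from.
outcome_utility = {"you_happy": 10, "you_indifferent": 0}  # exogenous: simply given

# Beliefs about how each strategy maps onto outcomes (hypothetical probabilities).
beliefs = {
    "flowers":   {"you_happy": 0.9, "you_indifferent": 0.1},
    "chocolate": {"you_happy": 0.6, "you_indifferent": 0.4},
}

def derived_strategy_preference(strategy):
    # Endogenous: computed from beliefs plus the exogenous outcome utilities.
    return sum(p * outcome_utility[o] for o, p in beliefs[strategy].items())

print(max(beliefs, key=derived_strategy_preference))  # 'flowers'
```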
Sometimes we’re not interested in fundamental change. Sometimes it’s fine to hold actors and their desires constant. (Legislative behavior comes to mind. So does highway engineering to eliminate traffic hotspots.) In those circumstances, decision-theoretic accounts can be powerful explanatory tools. But in other circumstances, they obscure more than they reveal. And in any case, they uphold as natural and inevitable a certain kind of person-hood that is both constitutively autonomous from other persons and fundamentally interested in the satisfaction of its own desires. So we have to ask both the social-scientific question and the ethical question, I think, in order to appreciate both the strengths and the limitations of decision-theoretic accounts, including rational choice theory.
As Barry O’Neill pointed out to me many years ago, part of the problem here is the conflation of expected utility with experienced utility. That is, von Neumann and Morgenstern’s original formulation of expected utility only required that a function U exist which describes the decisions made by an actor; the idea that the decision “brings utility” or some kind of warm and fuzzy feeling to the actor is a post hoc (and unnecessary) addition that should in theory have been cut off by the neo-positivists who claim to shave their theories regularly with Ockham’s Razor. The VNM formulation does, however, require a monadic actor to make decisions, and so individualism is still a baked-in assumption. I’ve always thought the biggest problem comes when methodological individualism slides into ontological individualism; the “what if” assumption of individual decision-making becomes an ethically-valued and unquestioned proposition about the nature of things (or the way things should be).
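(A toy version of the formal point, with made-up numbers: the representation only requires that there be some function u over outcomes such that observed choices can be described as maximizing its expected value; nothing in the formalism refers to a felt payoff.)

```python
# Sketch of expected utility as description, not experience (all values hypothetical).
# Any function u consistent with the actor's rankings will do; the claim is only
# that choices can be described *as if* E[u] were being maximized.
u = {"umbrella": 1.0, "dry_anyway": 0.0, "soaked": -2.0}

lotteries = {
    "carry_umbrella": [(1.0, "umbrella")],                     # certain outcome
    "leave_it_home":  [(0.7, "dry_anyway"), (0.3, "soaked")],  # gamble on the weather
}

def expected_utility(lottery):
    return sum(p * u[outcome] for p, outcome in lottery)

# The function describes the pattern of choice; it says nothing about what the
# chooser feels (an ant, or a thermostat, would satisfy it equally well).
print(max(lotteries, key=lambda name: expected_utility(lotteries[name])))  # 'carry_umbrella'
```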
Clever distinction, but I don’t think I am convinced by it. An expected utility function only describes *decision* (as opposed to behavior) if it provides some grounds on which the actor could or should choose one option over another, and I am not sure what grounds an actor would have for doing so unless one action brought more utility than another to the actor. What does preferring one option over another mean if the actor isn’t comparing the different states of the world and then concluding that in one of them she or he will benefit more? If the actor were able to determine that one course of action was morally right while another was morally wrong, and to act on that basis, I am unsure that we’re talking about expected utility in any sense.
Completely agreed about the slip into ontological individualism, though. It’s insidious.
Ah, but the behavior of ants can also be described by expected-utility theory, and we would not describe them as “getting more utility” from one “choice” or another; rather, there is a function U that predicts their behavior under a range of circumstances. Now, this doesn’t make the actions of ants moral when they act “altruistically” for the ant collective, which I think gets at your distinction in another useful way. I do agree that in practice everyone treats, and speaks of, expected utility as experienced utility, and that in-practice decision-theoretic accounts consequently suffer from that certain kind of selfishness. So I guess I’m agreeing, but making a distinction between the lack of (Kantian) altruism in the experienced-utility version and a similar, but distinct, lack of altruism in the expected-utility version. English is failing us here again, of course, since ants cannot be said to “expect” utility.
Yes, good point: genes can also be said to be “selfish,” right? But my bigger concern is that the ants or the genes are being described as behaving *as if* they expected certain utility from pursuing particular courses of action — the function predicts their behavior by connecting inputs to valued states of the world, and whether they subjectively “expect” anything is less relevant than the fact that they behave as if they did. At some level what I am raising, I suppose, is the issue of how odd it is that we make those assumptions about action in the first place.
“In any decision-theoretic account, individual desire makes the world go around, the interaction of desire plus the strategic situation produces a set of options which can be rationally adjudicated, and moral action is impossible.”
I don’t see that the last clause necessarily follows: If X’s overriding desire is to act morally, then X will derive the greatest utility from acting morally, and moral action is not impossible. (Maybe on a particular Kantian definition it is, but that’s not the same as “moral action” period.)
A recent WaPo article comes to mind about several young people in well-paying jobs who choose to live in such a way as to enable them to give most of their income to charitable etc. causes. Their highest utility is to act morally, ISTM.
“Maybe on a particular Kantian definition it is, but that’s not the same as ‘moral action’ period.” Yes it is ;-) I fail to see how acting to maximize utility has anything whatsoever to do with moral action, *even if* the rational action in question is morally laudable.
Or, to say what I said more clearly upon further reflection: there is a world of difference between acting in a morally praiseworthy way, and moral action. Rational choice theory has no room for the latter, by design.
Ok, I think I understand the distinction as you’ve made it here and also in the post. The distinction turns largely on motives or reasons for acting: if you act because it’s the right thing to do, that’s moral action; if you act to maximize satisfaction/utility, that’s not. Which raises the possibility that one might not be able to tell whether a particular act counts as moral action; one needs to know the actor’s reasons for acting. And that might get complicated, I suppose, if a given act has more than one reason or motive. But I do understand the point you are making here about rational choice theory.
Simply assuming that the only correct account of “moral action” is a Kantian one is a rather large assumption.
“Desire cannot be endogenized in such accounts without producing insoluble equations”
Assuming that moral action doesn’t lead to “insoluble equations” is another large assumption.
Well, I don’t think that moral action is a question of equations. Decision-theoretical accounts kind of have to treat it as one, which is part of my issue.