A Certain Kind of Selfishness

18 June 2013, 1142 EDT

This is more of a riff on Phil’s post from last week than a direct reply; the post that Dan and I wrote addresses more directly the issue of actor autonomy on which we think Phil misunderstood us (we and Phil were clearly on different semantic pages), so I am not going to go back over that ground here. Instead, and since we all basically agree that rational choice theory, as a species of decision-theoretic analysis, is located someplace in the tension between self-action and inter-action, I want to pursue a more specific point: the criticism of decision-theoretic accounts on both social-scientific and ethical grounds. In terms of the former register, there are kinds of questions that decision-theoretic accounts are simply not adequate to help us address. In terms of the latter register, the naturalization of individual selfishness inherent to decision-theoretic accounts, regardless of the preferences held by individual actors and however self-regarding or other-regarding those preferences might be, provides an important avenue along which all such theories can be called into question.

At the outset it is important to remember that rational choice theory is, methodologically speaking, a form of analyticism rather than a form of neopositivism. I agree whole-heartedly with Phil, and with Jim Johnson [ungated article, thanks to our friends at Sage!], on this point (which, parenthetically, makes the whole “empirical implications of theoretical models” project a bit of a methodological duck-rabbit). Presumptions about actor preferences and how they combine to form strategic situations are not falsifiable hypotheses; they are spare assumptions from which deductive consequences are produced. The disclosure of an equilibrium point in a game is not a falsifiable point-prediction about how actual actors will resolve a strategic dilemma; if anything, as Jon Elster noted years ago, it’s an ethical judgment about what should happen according to a specific notion of rationality. Characterizing a situation as a particular kind of game is not an explanation, but a critical, or maybe a critical-ethical, description. To the extent that this description figures into an explanation, it does so as a kind of baseline against which to assess the importance of situationally specific factors that produce deviations from that baseline, which is (as I have argued in detail elsewhere) precisely what one should use an ideal-type for.
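
To make the ideal-typical character of that baseline concrete, here is a minimal sketch; the game (a textbook Prisoner’s Dilemma), the payoff numbers, and the little helper function are purely illustrative choices of mine, not drawn from any particular study. The point is only that what the model delivers is a benchmark, not a point-prediction.

```python
# A minimal sketch of a game model used as an ideal-typical baseline.
# Payoffs (a standard Prisoner's Dilemma) and the helper are illustrative only.

from itertools import product

moves = ["cooperate", "defect"]

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return strategy profiles from which no player gains by unilateral deviation."""
    equilibria = []
    for r, c in product(moves, moves):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

# The equilibrium is not a point-prediction about actual actors; it is the
# baseline against which situationally specific deviations are assessed.
print(pure_nash_equilibria(payoffs))  # [('defect', 'defect')]
```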

This in turn means that there are two appropriate lines of criticism for a rational choice account. Phil is entirely correct that most of the charges leveled against rational choice theory are quite off-base, either because they ignore the ideal-typical character of the rationality presumptions that inform a game model, or because they erroneously presume that rational choice theory is about the content of actor preferences rather than about the formal character of those preferences, whatever their content. There is nothing inherently irrational about an actor preference schedule that is other-regarding or that seeks to maximize benefits to the group rather than to the individual; this only gets confusing when people forget that if I do something to benefit the group, and I do it because I value those benefits more highly than I value individual returns, I am actually maximizing my own individual utility because I am acting according to my individual preferences. If I derive psychological benefit from your happiness, and I act so as to maximize your happiness, I am not behaving irrationally; far from it, I am behaving very rationally indeed. The confusion lies in the specification of the game that I am actually playing: decision-theoretic accounts demand that the game in question always be specified in terms of individual actor preferences, so by definition the behavior that benefits you in the game has to benefit me more than alternative behaviors would in order for me to rationally engage in it. Either it makes me feel good to see you smile when I give you flowers, or giving you flowers is a long-term investment that will lead to you doing something for me in the future; either way, that’s why I give you flowers.
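
The formal point can be put in a minimal sketch, with entirely hypothetical payoff numbers and a hypothetical “warm glow” weight: an other-regarding preference just means that your payoff appears inside my utility function, so the act that benefits you is precisely the act that maximizes my own utility.

```python
# A minimal sketch, with hypothetical payoffs and a hypothetical "warm glow"
# weight: an other-regarding preference is still individual utility
# maximization, because your payoff enters *my* utility function.

def my_utility(action, warm_glow_weight=0.8):
    # Material payoffs to me and to you for each action (illustrative numbers).
    payoffs = {
        "give_flowers": {"me": -5, "you": 10},  # costs me something, delights you
        "keep_money":   {"me": 0,  "you": 0},
    }
    p = payoffs[action]
    # I derive psychological benefit from your happiness, so your payoff is
    # weighted into my utility; the game is still specified over my preferences.
    return p["me"] + warm_glow_weight * p["you"]

best = max(["give_flowers", "keep_money"], key=my_utility)
print(best)  # give_flowers: the other-regarding act maximizes my own utility
```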

Or, to put this another way: it is a logical impossibility within the confines of a decision-theoretic account for there to be any such thing as altruism, defined as engaging in behavior that does not in some way produce a positive return for the actor; nor, strictly speaking, can there be any such thing as moral action. “I follow the moral rule because it gives me satisfaction” is not, as Kant pointed out a long time ago, the same thing as “I follow the moral rule because it is the right thing to do,” so in order to be acting morally in a strict sense, an actor needs not only to ignore worldly consequences but also to ignore any personal, psychological benefits of doing the right thing. And it is precisely this that cannot happen in a decision-theoretic universe like the one envisioned in rational choice theory, because action in such an account arises by definition from the interaction of actor preferences and a strategic situation. And this in turn means that decision-theoretic accounts by definition reify and naturalize a particular form of selfishness: the satisfaction of individual desires is the highest form of good, even if those individual desires happen to be arranged such that pursuing them (whether deliberately and strategically or by accident is irrelevant here) generates a better outcome for everyone.

This is of course the standard line of argument supporting a market economy (if we all act on our interests, we generate the best of all possible equilibria, from which no one has an incentive to deviate), and it also leads to the more general set of social-theoretical concerns that we might call “the Hobbesian problem of order”: if society is composed of a number of autonomous individuals acting on their preferences, how do we ensure some kind of stable outcome? The market solution is one way to go here, as is the deliberate manipulation of the strategic environment (through laws, price controls, etc.) such that rational action is channeled into particular pathways as actors determine their optimal strategies (again, whether this happens deliberately and consciously or by accident is not relevant). Another solution is “socialization,” which ensures that actors have the same or similar preferences and will thus consistently choose the same things. In any case, the point is that any decision-theoretic account, whether or not it fits into any particular definition of “rational choice theory,” is going to have the same basic structure and end up affirming a certain kind of selfishness, because the relevant theory of action involves individual actor preferences; and this is the case *regardless* of whether the actors are selfish for individual material gain, selfish for the success of their cause, selfish for the alleviation of gross human rights abuses, or whatever. In any decision-theoretic account, individual desire makes the world go around, the interaction of desire and the strategic situation produces a set of options which can be rationally adjudicated, and moral action is impossible.
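
The channeling move can be illustrated with the same kind of toy model, again with hypothetical payoffs plus an invented fine (the equilibrium helper from the earlier sketch is repeated so this snippet runs on its own): alter the strategic environment and rational action flows into a different pathway, without anyone’s desires changing at all.

```python
# A minimal sketch, with hypothetical payoffs and a hypothetical fine, of
# channeling rational action by manipulating the strategic environment.

from itertools import product

moves = ["cooperate", "defect"]

def pure_nash_equilibria(payoffs):
    """Return strategy profiles from which no player gains by unilateral deviation."""
    equilibria = []
    for r, c in product(moves, moves):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

base = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# A law or price control (deliberately imposed or not) that penalizes defection:
# actors' desires are untouched, but the environment now channels their choices.
fine = 3
regulated = {
    (r, c): (p_r - fine * (r == "defect"), p_c - fine * (c == "defect"))
    for (r, c), (p_r, p_c) in base.items()
}

print(pure_nash_equilibria(base))       # [('defect', 'defect')]
print(pure_nash_equilibria(regulated))  # [('cooperate', 'cooperate')]
```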

And this, in turn, leads to the more properly social-scientific critique of decision-theoretic accounts, including rational choice theory: because such accounts depend by assumption on constitutively autonomous actors selfishly pursuing their own desires, they are incapable of explaining fundamental changes in those actors themselves. Desire cannot be endogenized in such accounts without producing insoluble equations, and what an actor wants has to be kept strictly exogenous in order for the game to make any sense. To be clear, it is no problem to endogenize an actor’s preferences over strategies (giving you flowers rather than chocolate can be shown to have arisen from a calculation based on information available to me); the problematic thing is endogenizing preferences over outcomes, i.e. fundamental desires. Nesting preferences over outcomes within a bigger game only pushes the problem back one level: if my desire to make you happy comes from a calculation about future benefits to myself, that only means that the game I’m playing is a more expansive one, and it is in that more expansive game that my exogenous preferences and true desires are to be found. Likewise, alterations to the constitutive autonomy of actors can’t be endogenized; if we want to explain the boundaries that an actor has from a decision-theoretic perspective, we have to go back one step and specify other decision-making units for which coming together to form the actor made sense (e.g. states that choose to pool their sovereignty), and those prior units are themselves exogenous to the account. Shifts in the very terms of actor-hood can’t be modeled in a decision-theoretic way; nor can the ongoing maintenance of a particular kind of actor, because that actor is presumed at the fundamental level of the individualist scientific ontology informing the decision-theoretic account. If our scientific-ontological primitives are autonomous actors with desires and preferences, we cannot explain where actors and their desires come from, nor can we forecast how they might change.
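
Here is a minimal sketch of that asymmetry, with purely hypothetical numbers: the preference over strategies (flowers versus chocolate) falls out of a calculation, but only because a preference over outcomes (how much I value your happiness) is supplied exogenously, and deriving that value from an outer game just relocates the exogenous desire one level up.

```python
# A minimal sketch, with hypothetical numbers, of the asymmetry described above:
# preferences over *strategies* can be derived inside the account, but only by
# taking a preference over *outcomes* as an exogenous input -- and nesting the
# account in a bigger game just relocates that exogenous input.

def choose_strategy(value_of_your_happiness):
    # value_of_your_happiness is exogenous: how much I want you to be happy.
    # Strategy payoffs are illustrative guesses at how much happiness each buys.
    happiness_from = {"flowers": 10, "chocolate": 6}
    # The preference over strategies is endogenous: it falls out of the calculation.
    return max(happiness_from, key=lambda s: value_of_your_happiness * happiness_from[s])

print(choose_strategy(value_of_your_happiness=1.0))  # 'flowers'

# Deriving my concern for your happiness from an outer game (expected future
# favors to me) does not eliminate the exogenous desire; it just moves it up:
def outer_game_value(desire_for_future_benefit=0.9, expected_future_favor=20):
    return desire_for_future_benefit * expected_future_favor

print(choose_strategy(value_of_your_happiness=outer_game_value()))  # still 'flowers'
```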

Sometimes we’re not interested in fundamental change. Sometimes it’s fine to hold actors and their desires constant. (Legislative behavior comes to mind. So does highway engineering to eliminate traffic hotspots.) In those circumstances, decision-theoretic accounts can be powerful explanatory tools. But in other circumstances, they obscure more than they reveal. And in any case, they uphold as natural and inevitable a certain kind of person-hood that is both constitutively autonomous from other persons and fundamentally interested in the satisfaction of its own desires. So we have to ask both the social-scientific question and the ethical question, I think, in order to appreciate both the strengths and the limitations of decision-theoretic accounts, including rational choice theory.