In his most recent post, PTJ argues that “things like Freakonomics are basically corrosive and should be opposed whenever practicable”.  While he repeats in that post (and the comments section) a number of dubious claims about what sorts of behavior are possible within a decision-theoretic framework, I think we’re past the point in the conversation where it is useful to argue about the possibility of writing down a decision-theoretic model whose actors are capable of moral behavior and belonging to communities.1 In this post, I’d like to discuss the moral argument PTJ makes against decision-theoretic work.

Let’s set aside debate about what sort of behavior decision-theoretic accounts might potentially allow for. When someone claims that something I know to be possible is impossible, I consider it appropriate to object, but it’s clear that what concerns PTJ most is not whether it is possible to model certain things, but the impact “decision-theoretic” work is actually having on us as human beings.

Some might find it distasteful to suggest that we refrain from conducting research for any reason other than a demonstrated lack of scientific merit.  For example, Tdaxp calls PTJ’s critique of rational choice anti-scientific.  But that’s unfair.  When faced with a fundamental tradeoff between two values, the person who chooses the course of action that best promotes value 1 need not do so out of any particular opposition to value 2.  Nothing PTJ said implies that he does not value scientific inquiry–only that he is willing to place limitations on it when its conduct impinges on certain moral values.  But so are we all, to some degree.  Unless you favor removing all restrictions on human and animal research, you–like me–are in the same broad camp as PTJ.

The question, at least in my mind, isn’t whether it’s ever appropriate to limit scientific inquiry for ethical reasons—it’s how corrosive the conduct and dissemination of “decision-theoretic” research really is.  Well, that, and how much insight we might gain if we were willing to accept that cost.2

Let’s go back to Freakonomics. In the first chapter, Levitt and Dubner discuss an informal experiment conducted by Paul Feldman, who delivered bagels to office buildings and left a box nearby for voluntary payment.  After finding that empty boxes got raided, as did cans with plastic lids, he decided his “honor system” needed a more secure way of storing cash.  After moving to secure wooden boxes, he found that about 87% of people pay for their bagels, with some slight variation over time and across workplaces.  Levitt and Dubner suggest that this tells us the answer to the question raised by “The Ring of Gyges” — most people, it seems, do not act immorally even if they have every reason to believe they can get away with it.

But of course, we don’t actually know how many could-be bagel thieves would have gotten away with it.  That Paul Feldman was not there to demand payment before handing over the bagel doesn’t exactly mean we’re talking about the Ring of Gyges. The question of how differently people behave when they’re not being observed has spawned a cottage industry — and some recent papers paint a somewhat bleaker picture than the one relayed in Freakonomics.  (Yes, really.)  See, for example, this study.  The results here do not suggest that most people behave selfishly when they can get away with it, but they indicate that significantly more than 13% of us do.  And all the authors did was offer people the opportunity to avoid having someone appointed as a potential recipient of their giving.  The participants’ behavior was still observed by the authors.  According to one study, when participant behavior is truly anonymous, more than 40% of participants in Dictator games keep everything for themselves.  That’s still not the majority, but it’s a big number.
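The gap between the observed and anonymous conditions can be illustrated with a toy simulation. To be clear, the parameters below are illustrative stand-ins loosely inspired by the figures quoted above (roughly 13% keeping everything under observation, over 40% under anonymity); they are not drawn from any specific study, and the function is my own sketch:

```python
import random

def simulate_dictator(n_players, p_selfish_observed, p_selfish_anonymous, seed=0):
    """Count the fraction of players who keep the whole endowment under
    each condition.  The p_selfish_* probabilities are assumptions for
    illustration, not estimates from any cited study.
    """
    rng = random.Random(seed)
    observed = sum(rng.random() < p_selfish_observed for _ in range(n_players))
    anonymous = sum(rng.random() < p_selfish_anonymous for _ in range(n_players))
    return observed / n_players, anonymous / n_players

# Illustrative parameters: ~13% selfish when observed, ~40% when anonymous.
obs_rate, anon_rate = simulate_dictator(10_000, 0.13, 0.40)
print(f"observed: {obs_rate:.1%}, anonymous: {anon_rate:.1%}")
```

The point of the exercise is only that a measured rate of selfish behavior is conditional on how (and whether) subjects believe they are being watched; the same population yields very different numbers under different observation regimes.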

Most people, I’m sure, are not familiar with these results.  And there’s a case to be made that the more we inform people of such results, the more we encourage them to behave the same way themselves.  (For what it’s worth, I’ve seen the opposite happen — I’ve had students tell me that, upon learning about the free rider problem and the challenge it poses to collective action, they will be sure to contribute even more in the future as a result.)  So should we stop such papers from being published?  Should IRBs deny proposals to conduct similar experiments?
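For readers unfamiliar with the free rider problem the students were reacting to, a standard linear public goods game makes the incentive concrete: contributions are pooled, multiplied, and shared equally, yet each individual earns more by contributing nothing. The numbers below (four players, a 1.6 multiplier, an endowment of 10) are conventional textbook-style choices, not figures from any study cited here:

```python
def public_goods_payoff(my_contribution, others_contributions,
                        endowment=10, multiplier=1.6):
    """Linear public goods game: all contributions are pooled, multiplied
    by `multiplier`, and split equally among the players.  Because the
    per-player return on each contributed unit (1.6 / 4 = 0.4) is below 1,
    every player does best by contributing nothing -- the free rider problem.
    """
    n_players = 1 + len(others_contributions)
    pool = (my_contribution + sum(others_contributions)) * multiplier
    return endowment - my_contribution + pool / n_players

# Even when everyone else contributes fully, free riding pays more:
others = [10, 10, 10]
print(public_goods_payoff(0, others))   # → 22.0 (free ride)
print(public_goods_payoff(10, others))  # → 16.0 (contribute fully)
```

Contributing nothing is the dominant strategy even though universal contribution would leave everyone better off (16 each) than universal free riding (10 each), which is exactly the tension the students were responding to.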

If I understand him correctly, PTJ would say yes.  And I respect that position.  But I’m not convinced the corrosive impact of such work is all that large.  I won’t say I believe it to be zero, and I think it’s fair to place some weight on that, but for me, we’d need to be talking about a much bigger corrupting effect before I’d be willing to oppose scholarly inquiry.  There’s value to increasing our understanding of human behavior.  If it turns out that most of us don’t behave like Homo Economicus, but a very significant minority of us do, I want to know that.  Even if it encourages some people to adjust their behavior.  However, if you could show me that the widespread dissemination of this knowledge would cause most of the currently well-behaved majority to turn selfish, I’d honestly reconsider.

What about you?

 

1. Most of what he writes makes infinitely more sense to me when I assume that he views all decision-theoretic models as being populated by Homo Economicus. That he more or less equates Freakonomics with rational choice strikes me as quite telling.  And when he says that Public Choice is not concerned with collectives because collectives don’t have preferences, there’s obviously a certain truth to this — and a good chunk of social choice is dedicated to illustrating this claim — but I just don’t know how to square such a claim with the prevalence of models containing a “social planner” who seeks to maximize a “social welfare function”.  I just can’t see how anyone who has read Maggie Penn’s work on identity (see here and here) could talk about decision theory’s “autonomous” actors being intrinsically incapable of belonging to communities.  He also writes about calculation as if he doesn’t believe the “as if” assumption is a thing. Unless he means something different by his words than what I take them to mean, he is asserting the impossibility of things that are very much possible.

2. This, of course, presupposes a consequentialist view of ethics. So far as I know, the only argument in favor of deontological approaches to ethics is itself deontological, and the only critique thereof is consequentialist. Thus, I have no idea what a consequentialist like me can say to a deontologist who takes issue with my decision to evaluate “decision-theoretic” work in this way. Or what a deontologist could say to convince me that I should do otherwise.