In his latest post, PTJ moves us past the worst critiques of “rational choice theory” and focuses on a few more nuanced concerns.1 I’m glad to see the conversation progressing, and this type of exchange is one of the things I love most about academic blogging. However, I find some of PTJ’s arguments problematic.
I argued last week that most criticisms of “rational choice” amount to attempts to persuade graduate students not to learn a language because its speakers don’t often discuss the right topics. That’s worrisome, because insofar as concerns about the narrow focus of those who speak said language are valid, they form the basis of an argument for learning that language and then changing the conversation far more than an argument against learning the language. But PTJ’s concerns are of a different nature — he agrees that it is inappropriate either to criticize rational choice for developing stylized models of an ideal-typical nature2 or to assume that rational choice theory inherently contains any particular substantive assumptions about people’s preferences. His critique, then, is not about the types of things people discuss when they speak the language of rational choice, but the inherent limitations of the language itself. And to some extent, I think he’s right. But not entirely.
A better metaphor for “rational choice” than a foreign language would be a toolbox. My post last week essentially argued that just because most people who go out and buy the toolbox never use anything but the hammer doesn’t make it fair to say that “rational choice” offers nothing to those who have need of a screwdriver. PTJ’s response, in essence, is threefold: 1) the toolbox doesn’t contain a paintbrush, and so may be useful to those who want to hang pictures or install bookshelves but won’t help anyone transform a room entirely; 2) the very act of giving someone a toolbox encourages them to think the world is full of nails and screws; and, most disturbingly, 3) this encourages people to try to fix what ain’t broken. (Okay, this metaphor’s not perfect either. Bear with me.)
Or, in his own words:
because such accounts depend by assumption on constitutively autonomous actors selfishly pursuing their own desires, they are incapable of explaining fundamental changes in those actors themselves.
This is the paintbrush criticism, and it’s entirely valid. I would quibble with the use of the word “selfishly” here, but we’ll get to that below. The more important charge here is that “rational choice” is incapable of explaining fundamental changes in actors themselves. As I’ve argued elsewhere, it’s actually entirely possible to model preference change. But PTJ would note, rightly, that there’s a difference between building a model in which preferences are allowed to vary over time and explaining why they vary. Appeal to exogenous shocks allows one to account for change in a certain sense, but it’s ultimately ad hoc, and there aren’t many factors of interest in social science that are truly exogenous.3 So while “incapable” might be a hair too strong, I’d readily agree that if you want to paint your room a different color, you ought to look elsewhere.
My real issue is with the second and third points.
PTJ says that “rational choice” does not even allow for the possibility of altruism. Given the abundance of theoretical models claiming to do just that, we need to unpack this a bit. What PTJ really means here is that one must choose between assuming that people maximize utility functions and believing in the possibility of moral behavior — one cannot simultaneously do both. That’s a strong charge, and one that can’t be dismissed lightly (hence the length of this post, for which I apologize in advance). However, it rests upon two problematic premises: 1) that Kant’s view of moral behavior is the only valid one (as PTJ basically acknowledges); and 2) a definition of utility maximization so narrow that it implies that decision-theoretic and game-theoretic models don’t actually require us to assume that actors maximize utility.
Even if one does not feel that Kant’s remarks on race call his authority on ethical matters into question,4 and even if we ignore the fact that consequentialist moral philosophy is a real thing, when PTJ says
I am not sure what grounds an actor would have for doing so unless one action brought more utility than another to the actor. What does preferring one option over another mean if the actor isn’t comparing the different states of the world and then concluding that in one of them she or he will benefit more?
he effectively proclaims that the word “utility” must be understood to refer to a concept so narrow that it is of no practical relevance. You see, it turns out that there is a methodology which allows us to analyze the behavior of agents who do what is right simply because it is right and which also happens to be so similar to the one in question as to be indistinguishable from it. Thankfully enough, algebra and calculus work the same way whether the inequalities we manipulate or the functions we maximize contain “utilities” or not. And since all these models really assume is that actors choose the action with the larger number associated with it, all we have to do is call those numbers something other than “utilities” and this critique simply goes away. For example, if we were to assume a world full of pacifists, we might assign the cost terms in standard bargaining models values of positive infinity. Or any arbitrarily large number, really. If we did that, our models would predict that war would never occur. I’m not sure what would be interesting about that, but neither do I understand why it would be incorrect to say that peace prevails in such models because of its inherent moral virtue. Put differently, that we happen to call the numbers we attach to outcomes “utilities”, and that most people alive today associate that word strongly with the work of Bentham and Mill, no more justifies PTJ’s comments than calling scyphozoa “jellyfish” makes them fish. Or fills them with jelly.
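To make the pacifist example concrete, here is a minimal sketch of a Fearon-style crisis bargaining setup. The function name, the parameterization, and every number below are mine, purely for illustration: two sides dispute a good worth 1, side A prevails in war with probability p, and each side pays a cost for fighting. Any division both sides prefer to war lies in the “bargaining range”, and as the cost terms grow arbitrarily large, that range swallows every possible division:

```python
# A minimal sketch of a Fearon-style crisis bargaining model. All names and
# numbers are hypothetical illustrations, not anyone's canonical model.

def bargaining_range(p, cost_a, cost_b):
    """Peaceful divisions x of a unit-value good that both sides prefer to war.

    A's expected war payoff is p - cost_a, so A accepts any x >= p - cost_a;
    B's is (1 - p) - cost_b, so B accepts any x <= p + cost_b.
    """
    return max(0.0, p - cost_a), min(1.0, p + cost_b)

# Ordinary costs: war is inefficient, so some (but not all) divisions avert it.
print(bargaining_range(p=0.6, cost_a=0.1, cost_b=0.1))  # (0.5, 0.7)

# "Pacifist" costs -- arbitrarily large numbers standing in for positive
# infinity. The range now covers every division, so war never occurs.
print(bargaining_range(p=0.6, cost_a=1e9, cost_b=1e9))  # (0.0, 1.0)
```

Nothing in the arithmetic cares whether we call those cost terms “utilities”, “moral abhorrence of violence”, or anything else; the model predicts peace either way.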
This, in turn, leads to my objection to the claim that “rational choice” affirms “a certain kind of selfishness.” If we interpret this claim loosely enough, it’s possibly true. But, again, if one observes that most people who purchase toolboxes never bother to use anything but the hammer, wouldn’t it be more appropriate to remind everyone that there are screwdrivers in there as well rather than advising that no one should buy a toolbox?
The real question here is whether thinking in terms of utility maximization intrinsically promotes selfishness, irrespective of the content of those utility functions. That’s what PTJ implies in his post, after all. The appropriate way to answer this question would probably be with an experiment, but allow me to register my skepticism in the interim.
My colleagues have often told me how surprised they are that I’m not more selfish; that I’m so generous with my time and that my behavior in committees and departmental meetings is the least “strategic” — read, selfish — of anyone in the department. In truth, I am behaving quite strategically, though perhaps a Kantian would think this says bad things about me. I place less weight on my own personal preferences than those of my students and colleagues in part because I think that’s the right thing to do5 (and I’m capable of thinking such thoughts, surprisingly enough), but also, I confess, in part because I know what people think about formal theorists. Because I know that I need to go out of my way to prove to my colleagues that I’m a decent human being, because they all have really strong priors about me based on the work that I do (which isn’t the least bit offensive). That may not be pure altruism, but if that’s the “certain kind of selfishness” PTJ is worried about, I think a lot of departments would be glad to have more “selfish” faculty members.
Setting my own personal experience aside, I’d note that there’s a great deal of work out there where the actor who is assumed to be maximizing some utility function is not an individual, and so cannot be said to be “selfish” in any traditional sense. I speak not just of states interested in promoting some notion of the national interest, but more so of welfare economics and public choice—works which analyze the behavior of an ideal-typical collective, with a particular focus on what is best for the collective. Granted, much of that work embodies a distinctly utilitarian ethics (and here I do mean that in the sense of Bentham and Mill), but concepts such as positive and negative externalities, so central to welfare economics, are used precisely to illustrate the problems with behaving selfishly. I could go on, but hopefully you get the point.
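To illustrate just the externality point, here’s a minimal sketch in which a quadratic private benefit and a linear harm to others stand in, purely hypothetically, for whatever the real trade-off is. Maximizing the private payoff overshoots what’s best for the collective, which is precisely the welfare economist’s complaint about selfishness:

```python
# A minimal sketch of a negative externality. The quadratic benefit, the
# linear harm, and all numbers are hypothetical stand-ins.

def private_payoff(q):
    return 10 * q - q ** 2            # the actor's own benefit from quantity q

def social_payoff(q):
    return private_payoff(q) - 4 * q  # net of the harm q imposes on everyone else

quantities = [q / 10 for q in range(101)]
q_selfish = max(quantities, key=private_payoff)  # 5.0
q_social = max(quantities, key=social_payoff)    # 3.0
print(q_selfish, q_social)  # the "selfish" optimum overproduces the harmful activity
```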
In sum, I agree that “rational choice” is not the best tool for analyzing preference change. I’m not sure that’s much more damning than saying that hammers are best used for driving nails and screwdrivers for turning screws, but it’s worth acknowledging. As for the claims that models populated by utility-maximizing agents intrinsically rule out the very possibility of ethical behavior, and that the analysis thereof promotes “a certain type of selfishness”, I’m unimpressed. Depending on how one defines “ethical” and “selfish”, these claims are either true in a sense so narrow as to be trivial (representing a scathing critique of a method no one really employs) or they are of rather dubious veracity. I’m not here to argue that everyone should embrace “rational choice” or utility maximization or game theory or whatever. Not by any means. But I would very much like to see people stop (intentionally or otherwise) devaluing the work of others by offering invalid critiques.
1. I realize that the repeated use of scare-quotes is a bit of an eyesore, but I remain convinced that the term “rational choice” is very nearly devoid of content. As this post demonstrates, even relatively sophisticated critiques of “rational choice” tend either to be right because the critic defines the relevant terms in such a way that s/he must be right, in which case the criticism only applies to a body of work so narrow that it’s unclear why anyone should care, or to be flat-out wrong (for reasons discussed in this previous post).
2. I agree that “rational choice” is a form of analyticism rather than neopositivism. As he notes, there are those who’ve tried to force game theory’s square peg into neopositivism’s round hole, including many in the EITM movement. But he’s not alone in thinking this is problematic.
3. I think this criticism can be oversold — I’m not aware of any scientific theory that doesn’t take as given at least a few things whose very existence is viewed by other scholars as needing explanation. Again I must insist that if our standard is to avoid all simplifying assumptions, we might as well pack up and go home, because we’re playing a game no one has yet figured out how to win.
4. As Henderson notes, Kant himself said that “the Negroes of Africa have by nature no feeling that rises above the trifling”, which is why Kant “advises us to use a split bamboo cane instead of a whip, so that the ‘negro’ will suffer a great deal of pains (because of the ‘negro’s’ thick skin, he would not be racked with sufficient agonies through a whip) but without dying.”
5. I’ve heard that tenure thing is pretty sweet, and I’m not under any illusions about how heavily anyone will weight my work with graduate students. But I sincerely believe that faculty members have an obligation to help people achieve their intellectual potential, so I never refuse to meet with students who want my help. I’ve had senior colleagues take me out to coffee and tell me “listen, this is a state school, not a small liberal arts college. Maybe you could afford to do that at William and Mary, but you can’t here. I’m just looking out for your best interests here.”
“there aren’t many factors of interest in social science that are truly exogenous” – Really?
That might have been a little strong. I didn’t want to focus too much on this part of PTJ’s critique. But I’d make two points here. First, in a statistical sense, exogeneity only requires that x -> y as opposed to x <-> y. That’s enough to let us say that the relationship between x and y is causal. But to explain y in a satisfying way, we might want more than a causal relationship. We might want to know why x took on the value it did in the first place. If a theoretical model can tell us that behavior will change when some parameter suddenly shifts in value, but can’t tell us when to expect such shifts or why they would occur, that could legitimately be seen as a shortcoming. It’s not damning, but it’s a shortcoming.
Second, some classic examples of exogenous explanatory factors, such as rainfall, have been criticized. See for example:
https://sarso.files.wordpress.com/2011/08/sarsons_aug_draft1.pdf
or
https://www.nature.com/nature/journal/v470/n7334/full/470344a.html
but I do like your closing paragraph
Thanks. :)
I also think (and I’ll drive PTJ mad by pointing this out :)) that Patrick’s argument about altruism only holds up if we assume (and know) the motives and intentions of the actor. Or at least, all the examples he gives depend on them. This is not a problem for me, as I insist on the importance of motives; it’s got to be a problem for Patrick, though.
Not only their motives and intentions, but also their (possibly mistaken) beliefs about what would happen if they chose one course of action over another.
That said, one might worry about the (alleged!) inability to account for the mere existence of altruism even if one believed it was essentially impossible to determine whether any specific act was altruistic. This is all moot, though, since it’s entirely possible to build decision-theoretic or game-theoretic models in which actors are altruistic. :)
My argument is only about what is required to do a rational choice account, not about what I would do myself ;-)
Help a grad student out, please. The point of ideal-types (be they paradigms or rational choice models) isn’t just to provide a baseline against which to judge deviations (as Patrick claims) but also to generate hypotheses, whether about potential relationships between variables or about the causal mechanisms connecting variables, whose observable implications can be tested empirically. So while perhaps not in itself a neopositivist method, rational choice contributes to neopositivist inquiry. Right? Wrong? This might be a simple point, but I stumbled on it. And my oral exam is in a couple months so guidance would be very welcome.
There are people who think that the only thing any of us should be doing is testing hypotheses. They will tell you that the only value to a theoretical model comes from its ability to generate hypotheses. Theirs is the predominant view in the field at the moment, but it is not the only one. I recommend PTJ’s book as well as Clarke and Primo’s book (A Model Discipline) for prominent critiques of such a view.
Thanks very much. I’ll check them out.
Yes, what Phil said. The fact that there is a neopositivist misunderstanding of what ideal-types are for does not exhaust what ideal-types are for!
Ok, granted, but the idea that ideal-types are models (or maps, in Clarke and Primo’s parlance) whose criterion is usefulness and not “truth” is an idea found in Waltz and KKV and any number of intro to IR survey courses. So I’m not clear on what constitutes a new and radical epistemological claim here. It’s just that models tend to be more useful when they better approximate / explain empirical reality. I mean, a map that leaves you in the middle of a lake is worse than one that gets you to your destination, as Apple recently found out. And the claim from C&P’s book that the hypothetico-deductive method of science has been totally rejected by natural scientists and philosophers of science after 400 years of astounding success also strikes me as difficult to accept. But this stuff is above my pay grade – I’m just trying to challenge you both a little more to further my own learnin’.
You’re absolutely right that it’s hardly new to point out that models are better judged in terms of usefulness than truth. But I think you’re misunderstanding the arguments.
When you say “models tend to be more useful when they better approximate/explain empirical reality”, you essentially assume that the goal is to generate testable hypotheses that we’ll fail to reject. If you start from the premise that models should only be used in a certain way, you will of course conclude that models that better facilitate that goal are superior. Nothing particularly insightful there.
Your claim about 400 years of astounding success misrepresents what has actually taken place in the natural sciences. The point is that H-D was never a valid description of what folks in the natural sciences were doing anyway. Many of us were told that it is when we were in high school, but any serious examination of the development of these disciplines reveals that it’s a lie. For one thing, many important discoveries were purely accidental/inductive. There was no hypothesis being tested, just a fortuitous discovery. Think penicillin or aspirin, among many others. For another thing, major theoretical contributions have occurred that have not yielded any major observable implications that have been tested. If you look at what’s been happening in physics since Einstein, for example, and you see H-D, you’re not looking closely enough. There’s also been a lot of important work that’s purely theoretical. Think string theory. Sure, there are people who object to string theory precisely because it doesn’t fit the H-D mold, but that hardly proves that H-D is the one true path to knowledge. And the very fact that string theory has figured so prominently in physics, like it or not, invalidates the claim that H-D is how they do things in the natural sciences. (And it’s hardly the only counterexample.)
How can models be useful if they don’t approximate reality well? Here’s a simple example. Dozens of scholars claim that an observed correlation between x and y represents a causal relationship — that x leads to y because of premises P1 and P2. Having offered a putative explanation for their observation, they succeed in convincing the field that their finding is probably not spurious. Then someone comes along and constructs a stylized model that contains premises P1 and P2 but does not include other prominent features of the world we actually live in — features that are clearly orthogonal to the original claim — and shows that P1 and P2 do not, in fact, logically imply that x causes y. That would be an important contribution to the literature. It may help pave the way for a proper explanation of the causal relationship (if it is in fact causal), or it might prompt us to realize that there is no causal relationship.
I agree with you that that’s a uniquely useful feature of a deductive model, but your last sentence, isn’t that just another way of saying “it generates new hypotheses”?
I also agree that induction is an important part of the scientific toolkit, but inductively produced hypotheses are just as much subject to falsification as deductively produced ones. Most scientific practice combines both deductive and inductive reasoning anyway.
And the accidental discoveries of penicillin and aspirin do approximate H-D, don’t they? There’s a test (presented by a fortuitous observation of the world) for which a hypothesis is formulated post-hoc. After all, when Fleming or Hoffman “discovered” penicillin or aspirin, they didn’t really know anything about their new compounds – just that they appeared to have certain properties. Their hypotheses about those properties were then subjected to further empirical testing in a plain H-D style. And most scientific discovery, political science or otherwise, begins with some observation rather than pure deduction – these are just examples of particularly useful and fortuitous observations, but hardly of violations of H-D. Finally, even if they were violations of H-D, a few counterexamples hardly negate the overwhelming utility of the approach.
As for string theory: every string theorist would LOVE to have the opportunity to test her theory – you’re turning scientific inability into a virtue. Inability to test our theories shouldn’t foreclose the pursuit of knowledge, but we can hardly consider any element of string theory to be a reliably demonstrated representation of how the universe actually is.
Fair points. I wouldn’t say the example I provided is another way of saying “it generates new hypotheses”, but the example I provided doesn’t imply that H-D work lacks value. And neither do the other examples I provided. And I wouldn’t say that H-D lacks utility! You’re right that string theorists would love to generate some hypotheses that survive testing. My concern is that many in this field immediately dismiss any work that doesn’t fit the H-D mold in a very, very narrow way. It would be almost impossible to publish the equivalent of string theory in political science because we’re so intolerant of work that doesn’t have regression tables full of stars in it. That ours is a field where major journals have at times adopted policies of refusing to even send out for review any work that doesn’t include every step of the process, in the “correct order”, is really disconcerting.
I absolutely agree with you here. Your call for an expansion of methods that count as “scientific” is well taken. Stars on a regression table can’t be the only major criterion for success. I also greatly appreciate the point, often misunderstood or not understood at all, that statistical models are models just as much as formal models are.
Anyway, thanks for letting me pepper (Popper?) you with annoying questions and objections!
My pleasure, ESM. I enjoyed the exchange!
The fact that a map that leaves you in the middle of a lake is worse than one that gets you to your destination says nothing necessarily about whether the map “better approximates empirical reality.” It just says that one map is more useful than the other one (which no one denies; what is at issue here is how to understand that usefulness, and whether “better approximation” does the trick — which I think is problematic, because how are Google Maps a better approximation of reality than Apple Maps are? It’s just that Google Maps are more useful, but we already said that, so unless “better approximation” *means* “more useful” I am not sure just what is being claimed here). And how a map “explains” is beyond me…since a map isn’t an explanatory instrument, but a practical depiction.
And in any case, when you use a map to navigate you aren’t testing a hypothesis. You are acting according to the directions provided. Different language-game, different standards.
But PTJ can’t even raise the question of why some maps are useful and some not. Which is exactly what science tries to do.
Nah. Science is about providing and refining useful maps. While it might be helpful in some contexts to refine a map by asking why it does or doesn’t work, that is IMHO just an instrumental convenience towards the basic overall goal. Whether it’s “true” or not is beside the point ;-)
Oh, that’s why they get a ‘useful’ theory but continue to attempt to see why the theory is useful; so many examples of this it’d get tedious to list even one of them (but Dark Matter, Higgs, viruses (and their treatment)). Instrumentalists confuse science with technology. Technically, you don’t need to know why the bridge will support the given weight, you just need to know the calculations that it will. Scientists want to know why it’ll hold that weight; from what I hear, American bridges could do with a bit more science (bloody instrumentalists). Anyway, useful for whom? Utility is a meaningless concept outside of context. And as for truth, well, where could I start? Confusion of an ontological version of truth with an epistemological one would be a first take, but then so would seeing truth in binary terms. Instrumentalism is a dead horse, a relic of all forms of positivism, neo or not, and analyticism. There’s not a serious philosopher of science defending it at the moment, not even van Fraassen; the fact that pol sci still does is shameful. Even Feyerabend thought realism was preferable to instrumentalism, and that should tell you something. Any theory of the human practice of science that isn’t consistent with the practice of science is deeply problematic. And yes, am I privileging science? You bet. But then everyone I know does, even as they deny doing it. :) And as soon as you reply to this you’ll prove the truth of that statement. :)
We’ve danced this dance before, and I think it’s better in person over drinks ;-)
I certainly advocate the idea of drinks….:)
You are absolutely right, ESM. Have a look at Heikki Patomäki’s and my piece in ISQ for reasons why. Essentially, Patrick’s analyticism and neopositivism begin to look very similar.
To ESM:
Apologies in advance for butting in here and for being somewhat cryptic (no time to write at length), but you need to understand (if you don’t already) that PTJ is a ‘mind-world monist’ i.e. he doesn’t believe in the existence of a mind-independent empirical reality; that’s one reason he doesn’t like the formulation “better approximate empirical reality.”
Whether there is such a thing as a mind-independent reality is, IMHO, essentially a metaphysical question, and you, ESM, don’t need to get into it in your oral exam. My guess is that your examiners probably do believe in the existence of a mind-independent reality, so, practically speaking, you might be well-advised to avoid the issue altogether.
p.s. Of course there are *other* reasons why one might not equate ‘usefulness’ w ‘approximating empirical reality’, as Phil’s replies indicate. But Phil believes there is such a thing as a mind-independent reality that one might, or might not, approximate. PTJ doesn’t believe that.
(now I have put words into both of their mouths so had better shut up)
True, but you haven’t mischaracterized my position. :)
Ah, thanks, that makes sense. I’m not sure how my committee of five rationalists would feel about mind-world monism, so perhaps I’d better get back to reading about the causes of ethnic conflict before I fall any further down the philosophy of science rabbit hole.
There is no such thing as mind-world monism in practice; only in thought. Does PTJ really believe that there was nothing before mind (not even a God)? Yes, I know Patrick, better over drinks…:)
“In sum… “rational choice” is not the best tool for analyzing preference change.”
Well sure, but to the extent that is true it is equally true that *political science* is not the best approach for analyzing preference change. That’s what psychology is for. But if we want to do it we have some methodologies that can help us, and they are perfectly compatible with “rational choice”. I’ll end on that point, but let me begin with this other thing PTJ wrote:
“Sometimes it’s fine to hold actors and their desires constant. (Legislative behavior comes to mind…”
Since I read that I’ve been trying to think of an example of an ethical behavior or moral act of concern to political scientists for which “rational choice” tools cannot — by definition — provide value. I must confess that I’m having some trouble, and PTJ does not provide an example. It’s quite easy to think of the opposite cases, and PTJ does provide plenty of examples of those, but earlier in the post PTJ suggests altruistic acts, and here’s how he defines it:
“altruism, defined as engaging in behavior that does not in some way produce a positive return for the actor…”
This is, I think, a quite poor definition of altruism. I consulted a handful of dictionaries on this, and they all suggest a definition something like “behavior that benefits a group to which an individual belongs but harms the individual herself.” Such behaviors are quite easily explained in a “rational choice” framework so long as the individual cares about the welfare of the group. In order to engage in altruism one must care about the welfare of the group (otherwise why act altruistically?). So what’s the problem?
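To see just how easily, here’s a minimal sketch; the linear functional form and every number in it are hypothetical. An actor who places enough weight on group welfare will maximize her way straight into the personally costly act:

```python
# A minimal sketch of altruism inside a utility function. The linear mix and
# all payoffs are hypothetical.

def utility(own, group, w):
    """Weight w in [0, 1] on group welfare, 1 - w on one's own payoff."""
    return (1 - w) * own + w * group

options = {
    "selfish act":    {"own": 10, "group": 0},
    "altruistic act": {"own": -5, "group": 50},  # costly to the actor, good for the group
}

for w in (0.0, 0.5):
    choice = max(options, key=lambda name: utility(options[name]["own"],
                                                   options[name]["group"], w))
    print(f"w={w}: the utility maximizer chooses the {choice}")
# w=0.0 -> selfish act; w=0.5 -> altruistic act. Same maximization machinery,
# different preferences.
```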
Okay, so maybe we need another word. What do we call a behavior that does not in some way produce a positive return (at least in expectation) for the actor? I have no idea. But that’s why I’m not a Kantian. And I think this whole discussion very much turns on what you think of Kantian morality.
Political scientists will usually (not always) be concerned with the first part of Kant’s “I follow the moral rule because it is the right thing to do”, i.e. political scientists will most often be interested in outcomes, but even if they weren’t, “because X” is a statement of what one’s preferences are and where they come from: if we want to model a Kantian we just give them a Kantian sense of what the right thing to do is and build a model in which they do it. A more complete account would go “I follow the moral rule because it is the right thing to do and I accept that I should do the right thing.” Once you add on that last bit you’re right back in choice-theoretic territory.
So as far as I can tell PTJ is wrong to say “in order to be acting morally in a strict sense, an actor needs to not only ignore worldly consequences but also ignore any personal, psychological benefits of doing the right thing. And it is precisely this that cannot happen in a decision-theoretic universe like the one envisioned in rational choice theory”. Sure it can. Just model the actor as a Kantian who places a higher priority on doing the right thing than acting selfishly, as you note in this post. Or take a different definition of morality, as I do.
PTJ writes: “If our scientific-ontological primitives are autonomous actors with desires and preferences, we cannot explain where actors and their desires come from, nor can we forecast how they might change.”
I think I see his point, but evolutionary and agent-based models can use “rational choice” inputs as starting values which are then modified over time in an endogenous process of interaction in a social context. These models aren’t super-common — they’re difficult to build and we’re more often interested in other things — but it’s not fair to pretend that they don’t exist. And it’s not new; Ostrom did some of this. We’re also starting to see more and more “rational choice” models that are embedded within structural contexts that are “experience-near” or “social-relational”, e.g. constructivist and network analyses. I think Dan and PTJ were on surer ground a few days ago when they wrote about what choice-theoretic accounts generally are rather than saying what they can/cannot do.
Excellent points, Kindred.
Regarding the first point — I don’t think it’s fair to say that that’s what psychology is for. Jackson and Nexon (https://ejt.sagepub.com/content/5/3/291.short) discuss a relational approach that they say accounts for fundamental changes in a way that “rational choice” cannot. I’m not sure I fully understand their approach, but I’m not in the business of criticizing that which I don’t understand.
I definitely need to read the full article.
Longer reply being written, but: one can’t model an actor “as a Kantian who places a higher priority on doing the right thing than acting selfishly” without doing serious violence to Kant, which is precisely my point. It’s the very act of analytically modeling an actor as an autonomous decider that makes Kantian ethical action impossible for that actor; it’s not the content of the actor’s preferences, but the very fact that in the model the actor brings her or his preferences into dialogue with an environment in a more or less strategic way that means that the actor is not, *by definition*, engaged in moral action defined as doing the right thing because it is the right thing to do. Your little addition — “and I accept that I should do the right thing” — does indeed force us back into choice-theoretic or decision-theoretic territory, and by making “acting morally” a choice instead of a constitutive dimension of action, forecloses the possibility of moral action from the get-go.
You can’t think of an example because this isn’t ultimately an issue of whether something can or can’t be explained in decision-theoretical terms. Almost anything can be, if you’re clever enough — the sole exception, I would claim and did claim, is that one can’t non-tautologically explain the constitution of actors as autonomous decision-making entities in the first place by using choice-theoretic tools. But my broader point is that not everything *should* be explained in these terms. Which is a moral objection, not a (social-) scientific one.
But as I said, longer reply in process, hopefully ready to post in a few days.
I can wait for the full post if you prefer not to answer, but I have a few quick questions. If A is the right thing to do and B is not, and a person does A without ever thinking about B for precisely the reason that A is the right thing to do, they are acting morally, yes?
Now if some external observer writes down a model where A is represented by 100 and B is represented by -100, and this external observer concludes that of course no one would do B, has this external observer somehow reduced the moral virtue of the action s/he observed?
What if the observer said that we can assign a value of m*100 to A and m*(-100) + v*(100) to B, with m representing an individual’s moral virtue and v representing their vice, thus allowing their model to account for the fact that some people do what is right (those with greater moral virtue, natch) while others do not? Would they have altered the moral content of the original act by writing down this model?
If the answer to these questions is “no”, in what sense is it correct to say that their model does not allow for the possibility of moral behavior? And if the answer is “yes”, why is that?
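For concreteness, here is that toy model written out, using the hypothetical numbers above:

```python
# The toy model from the questions above. m is an actor's moral virtue, v her
# vice; the values 100 and -100 are the hypothetical ones already given.

def value(action, m, v):
    if action == "A":            # the right thing to do
        return m * 100
    return m * (-100) + v * 100  # action B

for m, v in [(1.0, 0.2), (0.1, 0.9)]:
    chosen = max(["A", "B"], key=lambda a: value(a, m, v))
    print(f"m={m}, v={v}: does {chosen}")
# The virtuous actor does A; the vicious one does B. The model "accounts for"
# moral behavior without obviously altering the moral content of either act.
```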
Or, to put things simply, are you operating under the assumption that by modeling behavior using utility functions, one must necessarily assume that the actors whose behavior is being modeled make conscious, deliberate choices? Because your response to Kindred seems to imply this. I apologize in advance if I’ve misunderstood you though.
No, I am not assuming that the actors modeled with utility functions must make conscious, deliberate choices. I am merely pointing out that the very fact that we model the situation as involving “actors” “choosing” among “options” precludes the possibility that the actor is engaged in something fundamentally different than the maximizing of utility or the selection of a course of action which she or he values more highly. “Choosing the moral option” isn’t moral action precisely because morality is imperative, so there’s no choice about it — and if there is choice involved, then the morality wasn’t all that moral to begin with.
Part of the issue here is that our very language is so shot through with decisionist assumptions about action being motivated by basically subjective preferences that it’s difficult to even express what I’m trying to express. “Individual people choosing stuff” is basically an amoral approach to social action, in which moral thinking becomes one among other sources of value, and virtue becomes basically just a strong preference. Moral actors (and I am offering a rival ideal-type here, not an empirical description) don’t have utility-functions that feature trade-offs between moral and non-moral behavior; they have greater or lesser degrees of ambiguity about what the moral course of action is, and different ways of resolving that ambiguity. But once it is resolved, they make no choice to act morally — and if they don’t act morally, it’s tragic, perhaps tragic necessity, perhaps life is fundamentally tragic (St. Augustine thought so, Morgenthau and Lebow would largely agree), but tragic all the same. All of this takes place on a planet far away from Planet “Rational Choice.”
Enough for now — I need a lengthier post to spell this out in more-like-sufficient detail.
Thanks for the response. I think I do understand your position, though I look forward to the post.
I’m just not sure that when analysts acknowledge that other courses of action *existed*, that necessarily means that the actors make “choices”. I’m not sure there necessarily needs to be a dialogue between the actors’ preferences and their environment, as you state above. To be fair, that’s certainly how most people who analyze such models talk about their models. Myself included! But I’m not sure that the methodology intrinsically requires it.
PTJ,
Thanks for the replies and I look forward to the longer one. You could probably tell that I was trying to egg you on a bit in the hope that I could better understand your perspective.
As I said in my comment above, I’m not a Kantian so I don’t particularly care whether I do violence to Kant. But I am interested in the question and I think I have a sense of where you’re going. Take the Prisoner’s Dilemma as the simplest possible example. A “rational choice” perspective would “recommend” defecting in a one-shot game given typical assumptions. But the universalized general maxim requires us to cooperate even if the results are, as you say, tragic. Defection is not truly an option *for the person who has previously accepted the universalized general maxim as binding*. (I personally think it’s ludicrous, so I don’t accept it, but perhaps someone out there does. I can provide an example of why I think this if you like, but I’m sure you understand the basic arguments against.)
Kant recognized that humans are rational beings capable of making choices. Autonomy is big to Kant, and a big part of autonomy is decision-making. Kant wanted people to make particular kinds of choices, to the extent that we might be tempted to say that choice no longer truly exists. But the whole of the universalized general maxim is strategic! You look down the game tree at what the outcome would be if everyone behaved the same way. If you like that outcome and it wouldn’t violate others’ autonomy, you take (choose to take?) the action. If you dislike the outcome, you take another.
Right?
More generally, the universalized general maxim is a *decision rule*. Kant does not think people are automatons. He believes that they choose which actions they take. He’s providing them with a criterion for making those choices. How is this antithetical to choice-theoretic methodology? It practically *is* a choice-theoretic methodology.
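To put the Prisoner’s Dilemma example in black and white (the payoffs below follow the standard hypothetical ordering, and the “universalization” check follows my rough gloss rather than Kant’s exact formulation):

```python
# A minimal sketch of the one-shot Prisoner's Dilemma. Payoffs follow the
# standard hypothetical ordering T > R > P > S.

payoffs = {  # (my move, other's move) -> my payoff
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_reply(other):
    return max("CD", key=lambda me: payoffs[(me, other)])

# Defection dominates: it is the best reply to either move.
print(best_reply("C"), best_reply("D"))  # D D

# A crude universalization test used as a decision rule: evaluate each move
# as if everyone took it, then pick the best. It selects cooperation.
print(max("CD", key=lambda move: payoffs[(move, move)]))  # C
```

Both computations run on the same machinery; the decision rule just changes what gets maximized.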
And I appreciate being pushed ;-)
On this specific point: Kant thought that the will was pure practical reason. The universalized general maxim — the categorical imperative — is indeed, as you suggest, a technology for ascertaining which course of action to take. But I think you mis-state it a bit: it’s not “what would the outcome be if everyone behaved in the same way,” but “can the rule suggesting this course of action be universalized.” Which is fundamentally different. Kant believed — and, just to be clear, I’m not an orthodox Kantian, and I think that this is where Kant falls off the bus — that one could make that determination on the basis of reason alone, because Kant believed that reason was intrinsically universal. I would rather say that such determinations are made contextually, intersubjectively, discursively — but without losing that basic intention to ascertain what course of action is imperative in that situation.
Now I have basically written much of my post in replies ;-) so give me a couple of days to pull it all together…
Or, to put this another way: Kant’s overall project was trying to square a circle and demonstrate that individual autonomy was not only consistent with universal reason, but was in fact entailed by universal reason. I think he failed, because the circle can’t be squared. You either get complete actor autonomy or broader community standards, but you can’t have both. A society of individuals isn’t a community.
More to the point: why is Kant being given such a fundamental role here? Kant can’t explain agency, choice or free will; that’s the third antinomy. His dualistic metaphysics leaves him in terrible trouble re this, and despite Hume waking him up, he’s still sleepwalking. Don’t get dazzled by the lights of so-called ‘great philosophers’.
Not convinced that Kant is actually a dualist, but I do think that Kant made some very cogent points regarding the nature of morality, despite his other flaws.
Well, I’m not sure how you’d argue against his dualism, but that’s another story. I agree absolutely about the morality issue, but these insights only work if you drop his metaphysics (which is more evidence of his dualism). But as I say, that’s the third antinomy. My point was that irrespective of how insightful his moral perspective might be, it can’t be the standard against which we hold everything up. He has one position among many.