In his later writings, Ludwig Wittgenstein spends quite a bit of time thinking about the problem of exactly what we are doing when we refer to the “inner” state of some entity, as juxtaposed with “outer” evidence of that entity’s behavior. We see someone grab at their arm and yell “ouch!”, and then their face contorts in a grimace, but Wittgenstein points out that this is not “outer” evidence of an “inner” mental state called “being in pain.” Rather, the observable behaviors express “being in pain” — at least, they do so for us, since we are part of a form of life in which such behaviors mean “being in pain” — and our conclusion that the person is in pain is not an inference from evidence but, under normal circumstances, simply an understanding of what is being expressed. (This is similar to the way that we don’t usually “infer” from a combination of letters that a particular word is meant; we just read the word as written.)
The problem, Wittgenstein points out, is that we ignore normal circumstances when we turn this into a situation of epistemic uncertainty and ask whether we can definitively know whether someone is really in pain or whether they are dissimulating or faking it. “That an actor can represent grief shows the uncertainty of evidence, but that they can represent grief also shows the reality of evidence.” Under normal circumstances we don’t doubt that someone is in pain or is feeling grief when they behave in particular ways, and that in turn is what makes it possible for someone not feeling a particular emotion to act “as if” they were. If we have reason to suspect that someone is lying or pretending, then and only then would we entertain doubts.
Such doubts might be resolved in any number of ways (for example, looking at consistency of behavior across times or contexts), but under normal circumstances, there is no doubt to resolve, because this is not an epistemic problem.
Exactly the same thing is true of our presumption that human behavior is consciously intended or motivated.
It’s always presupposed that the one who smiles is a human being and not just that what smiles is a human body. Certain circumstances and connections of smiling with other forms of behavior are also presupposed. But when all that has been presupposed someone else’s smile is pleasing to me. … I react immediately to someone else’s behavior. I presuppose the inner in so far as I presuppose a human being.
We do not conclude from behavioral signs that the entity we are interacting with is a conscious human being. Rather, we treat the entity as a human being, which presumes consciousness and agency and the rest of the package, and therefore understand certain facial movements as a smile. If we thought that the entity were not a human being, but an audio-animatronic device, we would not treat its movements as resulting from the deliberate choices of an agent, even if they looked “outwardly” more or less identical.
In his engaging and provocative article, Adam Lerner adapts Chalmers’ nine aspects of consciousness (Table 1, p. 267) and ingeniously maps them onto the state (Table 2, p. 274), by way of arguing that if we accept that individual human beings are conscious according to these criteria, we should also accept that states are conscious. To do otherwise, he points out, is “neurochauvinism”: connecting consciousness to the particular physical substrate of the human brain, as though only neurons made for consciousness.
The argument hangs together, but Wittgenstein’s reflections point to a key problem with the very notion of measuring these nine aspects (or any other aspects) of consciousness in the first place. Measuring each of these aspects requires an attribution of actor-hood by an observer, and that attribution already presupposes something like consciousness. While this is most obvious in aspect number 4, “self-consciousness” — I mean, the circularity is right there in the name! — the other eight aspects are similar in this respect. To say that an entity displays “voluntary control,” for example, requires the prior assumption that the behavior that we see results from a capacity to make decisions and act on desires rather than simply reacting to the environment. Similarly, determining that an entity has “introspection” or “awareness” requires reading behavioral cues as indicating the presence of some internal process underlying the behavior, but this assumption is precisely what the evidence is supposed to demonstrate.
This is why I prefer to treat the question of consciousness — or person-hood, or subjectivity — as a moral issue and not an empirical one. When we give answers to the question of whether entity X is or is not conscious, what we are doing is drawing a line around our operative concept of “beings like us,” which has implications for what kind of engagement with that entity we think is appropriate.
The notion of animal cruelty, for example, presumes subjective consciousness on the part of animals, such that they can feel pain and are therefore entitled to be treated according to a different set of rules than, say, a toaster oven. (Outside of the Battlestar Galactica context, no one worries about being potentially cruel to a toaster.) Although it cannot be proven that an animal is conscious — what evidence would be decisive for such a proof? — it certainly makes a difference whether we treat animals as conscious entities or not.
We might map various moral cosmologies, so to speak, by examining what kinds of entities they place in the “having moral status like us” column and which they do not, using the (non-)attribution of consciousness as one of the key indicators of what kind of moral status is assigned. Of course, there is no reason to limit ourselves to looking at different species in this connection; the humanness of “foreigners” and “aliens” of various sorts has been a perennial concern of politics and international affairs since, oh, forever.
All of which is to say that I am not sure that we can ever provide a definitive answer to the question of whether some entity is or is not conscious. Any available empirical evidence for or against a particular entity’s being conscious can be read either way, depending on which presuppositions we bring into the process. So the question I have for Lerner is, what’s at stake in the determination of whether states are or are not conscious? Indeed, I want to take a step back and suggest that we simply don’t need an account of consciousness to have a viable explanatory social science. Certainly to do any empirical work on social phenomena we need a specification of the relevant entities, a kind of bestiary or scientific ontology that we use to survey the landscape and pick out the objects of study. But we need an account of consciousness for such a catalog as little as we need an account of cellular respiration or electromagnetism.
Social analysis basically by definition cuts in at the level of meaningful transactions between entities (as Max Weber actually said, “individuals” are just the sites at which social transactions occur), and anything “below” or “above” that analytical level is only relevant to the extent that it is socially mediated by such transactions. Physical factors matter to social analysis insofar as they have consequences that show up in meaningful transactions, whether we are talking about viruses or nutrients or asteroids and alterations of planetary orbits. To say that meaningful social transactions require that the parties to the transaction have a certain kind of neurological wiring, or that temperatures need to remain within a certain range for those parties to go on living, is interesting, but doesn’t directly contribute to the social analysis.
And whether an entity is considered “conscious” by a particular social group can be an important part of explaining how that group operates without the analyst having to take any position at all on whether that entity is or is not conscious. The analyst’s own prejudices and prejudgments might get in the way here quite easily, and result in an anthropological mis-statement of the cultural world of a given group — say, a group that regards itself as part of a forest and lives continuously with it. Nor do we need consciousness for a meaningful definition of agency; indeed, the good old Giddensian definition of agency as the capacity to have done otherwise gets us out of the heads of any actors and into the relational and transactional social environment — which is where social analysis belongs anyway.
As far as I can tell, precisely nothing is added to a piece of social analysis by the assertion that thus-and-so action was “consciously intended” rather than resulting from the configuration of a set of specified social — and hence at least intersubjective rather than purely subjective — elements. Would a conscious state act any differently than a state that wasn’t conscious? Would society really look all that different if we analysts simply bracketed the question of consciousness altogether, and instead regarded determinations of whether some entity is conscious as telling us something about the practically negotiated boundaries of a social group? That said, Lerner’s article does a nice job laying out the contours of the individualist moral cosmology that so much of our field (and arguably, the whole post-Enlightenment Euro-USian world, which still suffers from a Cartesian anxiety hangover in the worst way and continues to have problems with any analytical or moral starting-point other than “the autonomous individual”) still dwells within. I agree with him that if we accept that account of consciousness, then the reluctance to attribute consciousness to the state (or to any other collective actor) is puzzling: if one kind of individual is conscious, then arguably other kinds of individuals are as well. I’m just not convinced that we should accept that account in the first place, or that we should be trying to answer the question of who or what “really is” conscious.
The better question, to my mind, is how the question of consciousness is answered in practice by different social groups — including, perhaps, by us ourselves — and what practical implications those different answers have.
Have you ever read The Modern Prince by Gramsci? My reading suggests the state has a “personality” that is established over several iterations of sovereign development. France, being France, will respond to an external stimulus in a seemingly predictable fashion because of what Gramsci calls “state spirit.”
Elsewhere, we could apply Heidegger along similar lines. Heidegger says we operate in a ‘referential totality’ that dictates our behavior in a particular environment. I know how to overcome the challenge of stairs in the dark hallway of my home because I’m accustomed to “being” in my own home. If, however, I found myself in an Escher world, I would be annoyed and aggravated by the stairs in a dark hallway because an Escher world is not my referential totality.
Add the two together, and we have a pretty novel way of understanding “the state.” This stuff makes my brain feel good. Great assessment here.
Would talking about these different communities producing intention (explicitly and implicitly) within the corporate body of the state not close the gap between doing politics and doing political science, contrary to your Weberian inclinations?
@Matt, I’m not sure how it would. Mapping a moral cosmology is different from participating in it, and my observing that X group attributes Y intention to entity Z isn’t a judgment on whether a) X is correct to do so; b) Z actually is conscious; or c) Z actually has Y intention. Engaging in a debate about a) would, I agree, pull us closer to politics, albeit a politics of the practical-moral variety. Debates about b) and c), I would say, might appear to be resolvable epistemically or factically, but this is an illusion. So that leaves us with mapping, empirically rich, but non-judgmental.
My sense of the inescapability of politics builds on the last sentence of your post: “The better question, to my mind, is how the question of consciousness is answered in practice by different social groups — including, perhaps, by us ourselves — and what practical implications those different answers have.” It’s the “by us ourselves” that interests me. If the state gets imagined through different collective social acts, institutions, discursive regimes, daily plebiscites, language games, or whatever else, then does mapping become one of those things? Does asserting that one maps without judgment not assume a certain place within the corporate body of the state, even as one claims to do so without judgment? Is this label of “just mapping” not itself an act drawing a permissive boundary towards and within the state? Is the assertion of “without judgment” more a rhetorical foil than an actual ability to escape value judgments? Does mapping itself become a particular type of virtue signaling for, or an assumption of, specific collective ways of being?
@Matt Ah, I see. Certainly it’s possible that “without judgment” could be a more or less covert way of intervening in an arrangement that one only claims to be representing. I would say that we can only prevent that from happening by quite sharply distinguishing scholarly-explanatory language-games from practical-political language-games, and working to make sure that those streams remain uncrossed.
Now, that said, I suspect that *some* crossover is inevitable. A mapping of some group’s answer to the question of consciousness might be a way of shaping that group’s values, or it might be an observation that the group does in fact answer that question in a particular way, and even if it is both at once, it remains each of those things individually. We have to be clear on the differences between the domains — Wissenschaft vs. Politik — in order to prevent epistemic abuses. It’s not about escaping value-judgments as much as it is about sustaining the possibility of disinterested claims.
And certainly what I think is a disinterested claim might not be; that’s what other scholars can and should point out, precisely so we can check one another’s excesses. After all, I would argue, the point of a disinterested claim is to show us where our value-commitments actually are.