Phenom Cogn Sci (2008) 7:85–97
DOI 10.1007/s11097-007-9065-z
Regular Article

Is moral phenomenology unified?

Walter Sinnott-Armstrong

Published online: 28 August 2007
© Springer Science + Business Media B.V. 2007

Abstract  In this short paper, I argue that the phenomenology of moral judgment is not unified across different areas of morality (involving harm, hierarchy, reciprocity, and impurity) or even across different relations to harm. Common responses, such as that moral obligations are experienced as felt demands based on a sense of what is fitting, are either too narrow to cover all moral obligations or too broad to capture anything important and peculiar to morality. The disunity of moral phenomenology is, nonetheless, compatible with some uses of moral phenomenology for moral epistemology and with the objectivity and justifiability of parts of morality.

Keywords  Phenomenology · Moral phenomenology · Emotions · Fittingness · Demands · Approval · Harm

No, moral phenomenology is not unified. But perhaps I should elaborate. I begin by explaining what moral phenomenology is.

W. Sinnott-Armstrong (*)
Department of Philosophy, Dartmouth College, 207A Thornton Hall, Hanover, NH 03755, USA
e-mail: [email protected]

Moral phenomenology

Moral phenomenology might seem to be the phenomenology of morality. That can’t be right, however, because of what phenomenology is. Phenomenology is not a doctrine. It is a method. The method is simple: describe the phenomena. More precisely: introspect on your own experience and then describe what it is like to have certain kinds of experience while remaining neutral on whether such experiences correspond to or accurately reflect anything external. Imaginative variations can be used to test the essence of kinds of experience. Philosophers then try to draw philosophical lessons from those phenomenological descriptions. Husserl invented this method and added many details. Others claim to do phenomenology when they use quite different methods. My question is only about phenomenology as I just defined it.

This method is limited to certain kinds of topics. Nobody would ever study the phenomenology of grass. Grass is a purely physical organism with no mental states, so it has no distinctive phenomenology. Perhaps we could describe what it is like to look at grass (close up or far away) or smell grass or taste grass or feel grass or hear grass played as a musical instrument. We could even talk about what it is like to get high by smoking grass. None of that would amount to a phenomenology of grass itself. To have a phenomenology of something, that something must be at least partly mental.

Indeed, it must be wholly mental. Imagine trying to contact a friend by email. You compose the message, push the “send” button, and watch the message window disappear. Your experience so far would be exactly the same whether or not you succeeded in contacting your friend. An ontologically neutral description of your experience should, then, refer only to trying to succeed, seeming to succeed, and believing you succeeded. Success itself, however, is partly external to your experience, so success should not be mentioned in a properly phenomenological description of your experience. There is a phenomenology of trying, seeming, and believing, but no phenomenology of succeeding.

Compare also murder, which is defined partly by intention and partly by causing death. If I commit (or witness) what seems to me to be a murder, my experience will be different than if the death seems accidental, so there can be a phenomenology of seeming to commit (or witness) murder. Still, the death and the causal relation between the act and the death are not contained in my experience, so there can be no phenomenology of real (as opposed to apparent) murder. Since phenomenology is of the mental, there also cannot be phenomenology of morality, unless morality is mental.
Some subjectivists believe that morality is mental in the sense that morality is constituted by actual beliefs and/or emotions. However, many people believe that morality is no more mental than grass or Platonic numbers. They call themselves moral realists. Even many non-realists see morality as objective or idealized and, hence, distinct from occurrent mental states with their flaws. Since idealized states are not accessible to phenomenology, such moral philosophers imply that there is no phenomenology of morality itself (or of moral duties, virtues, and so on). On all such views, moral phenomenology is not phenomenology of morality.

Thinking vs. deciding

Nonetheless, even moral realists can accept that there is a distinctive phenomenology of mental states directed at morality. Such mental states come in many flavors, including moral deciding and moral thinking. By moral deciding, I will mean a conscious mental process of choosing to act one way rather than another because the former is believed to be morally required, good, or better than the alternatives or because the latter is believed to be morally forbidden, bad, or worse than the alternatives. By moral thinking, I mean a conscious mental process of formulating, considering, and/or endorsing a moral belief content.



Thinking in this sense is distinct from believing, since we can believe something while not consciously thinking about it. Even while asleep, religious people believe that God exists. We cannot study what it is like for the sleeping person to have that unconscious belief. It is not like anything, assuming the sleep is deep and dreamless. Hence, there is no phenomenology of beliefs (including not only belief contents but also belief states). In contrast, we can do phenomenology of thinking.

Very roughly, the process of thinking involves several elements or stages: First, a person formulates a potential belief content. Next, that person considers that content (or thinks about whether or not it is true). In some cases, the content seems accurate, correct, or true to that person. Then sometimes the person endorses, accepts, or believes that content. The result is a judgment.

Of course, this process is not rigid. Some steps might be missing: thought can lead to judgments that do not seem true to the thinker (such as that my desk is mainly empty space between its subatomic particles). The steps might be ordered differently: we might believe something before we ever consciously consider whether it is true. And the process can loop back, such as when a considered content is judged false, and then an alternative is formulated, considered, and so on.

Despite many such variations and complexities, the phenomenology of thinking describes what it is like to go through such stages or what it is like to reach the final stage of endorsing a judgment. We can also study the phenomenology of considering or of seeming if we describe what it is like during those stages in particular. Although we can do phenomenology of considering, seeming, and thinking in general, we can also ask what is common and peculiar to our experiences when we think about moral belief contents in particular. Then we are doing phenomenology of moral thinking.
After a moral judgment is endorsed, if the situation calls for action, agents sometimes decide whether or not to do the act that is judged to be morally bad or good, wrong or right. The moral judgment can be made without any such decision, and the decision can be made without any conscious moral judgment. Hence, to study the phenomenology of moral deciding is different from studying the phenomenology of moral thinking (or considering, seeming, or judging).

Which is the topic of moral phenomenology? All of the above. When people talk of moral phenomenology, they have different topics in mind. In this volume, Annas argues, “the virtuous person is not motivated, when acting virtuously, by thoughts part of whose content is the thought of what virtue requires or what the virtuous person would do.” Then she asks, “What, then, is distinctive of the virtuous person's phenomenology when she acts virtuously?” Her question is clearly about moral deciding rather than moral thinking.

In contrast, Tolhurst (this volume; cf. 1998) tries to describe what it is like when a moral belief content seems true. It can seem that way even when the judgment is about someone else’s action, when the act was done by me but long ago, or when it is a general type of act that seems wrong. In such cases, I cannot or need not be deciding whether or not to do such an act. So Tolhurst is concerned not with the phenomenology of moral deciding but, instead, with the phenomenology of moral seeming.

Timmons and Horgan (this volume; cf. 2005) discuss the views of Mandelbaum (1969), who presents moral phenomenology in terms of a felt demand. The feeling of demand is supposed to be present with or in “direct” moral judgments, defined as “those judgments made by an agent who is directly confronted by a situation which he believes involves a moral choice on his part” (Mandelbaum 1969, 45). Such direct judgments are connected to decision, but the agent still can choose not to act according to the judgment, so deciding and judging remain distinct. Moreover, a demand can be felt long before a decision is made, such as when I think about a moral demand that I review a manuscript before the end of the year, but then wait until next month to decide when to do the work. So Mandelbaum is not talking about the phenomenology of moral deciding (as Annas is). But he is also not talking about the phenomenology of moral seeming (as Tolhurst is), since I can feel a demand when I judge that an act is wrong even if it does not seem wrong, such as when an argument convinces me that it is wrong to eat meat even though I am attracted to eating meat. Thus, Mandelbaum’s topic is the phenomenology of moral judgment rather than of moral deciding or moral seeming.

Since these topics are distinct, when we do moral phenomenology, we need to decide which we are describing: moral deciding or moral thinking (including seeming and judging). Although my main points could easily be extended to moral deciding, I will focus on moral thinking because I am especially interested in whether moral phenomenology can solve problems in moral epistemology, and the phenomenology of moral thinking seems most promising for that purpose. I am also interested in the relation between phenomenology and scientific psychology, and it is much easier to do experiments about moral thinking than about real moral deciding.

Areas of moral judgment

Even if we limit our topic to moral thinking rather than deciding, I doubt we can say anything interesting about the phenomenology of moral thinking in general. There is just too much diversity among moral judgments and among the kinds of topics those judgments are about. The point is not that people make different judgments. The point is only that morality covers too many disparate issues.

An analogy might help. Try to describe what it is like to love, where that covers my experience when I love my wife, I love my children, I love my sister, I love my pet, I love philosophy, I love golf, I love blues music, and I love ice cream. These different kinds of love are so dissimilar that any description of love that covers them all would be too broad to be useful. In general, there cannot be a useful phenomenology of anything that lacks a certain kind of unity. Morality might seem more unified than love (maybe because “love” is ambiguous in the cited uses, though I doubt that). But the point here is simply that there cannot be a phenomenology of moral judgment if morality lacks the requisite unity. Morality lacks that unity if it divides into areas that are too disparate. Does it?

Shweder thinks so (see Shweder et al. 1997). His anthropological studies lead him to divide morality into three categories: the morality of autonomy, the morality of community, and the morality of purity. The morality of autonomy (a.k.a. suffering or harm) concerns acts that limit choice or cause harm. Its rules forbid causing harms of certain sorts in certain ways to certain people in certain circumstances. This area of morality is violated, for example, when a robber shoots a victim to steal money. The morality of community (a.k.a. hierarchy) governs loyalty, honor, and modesty, as well as respect for equals and proper behavior towards superiors and inferiors in social hierarchies. An example of a harmless community violation might be an adult spitting on his father’s grave. The morality of purity (a.k.a. divinity) governs harmless sexual practices, cannibalism, and so on.

Haidt and Joseph (2004) then argue that violations of rules in each of these categories lead to different emotional reactions. Violations of the morality of autonomy produce anger at the agent (and compassion for the victim). Violations of the morality of community issue in contempt for the agent. Violations of the morality of purity lead to the emotion of disgust.

I have doubts about these labels for the various emotions. In a lecture (Princeton, November 2005), Roger Scruton said that the habits of cannibals fill us with “horror”. It is not clear to me that the feeling Scruton calls horror is the same as the feeling that Haidt and Joseph call disgust. Maybe when different people think about cannibalism, some feel horror and others feel disgust. Maybe everybody feels both. Or maybe these are names for the same type of feeling.

I also doubt that these categories are exhaustive. Haidt and Joseph (2004) added the ethics of reciprocity, which governs cheating and cooperation, as well as lies and promises. Violations of this area of morality produce feelings of distrust and sometimes anger. I might add the ethics of distribution, which concerns fairness. We can feel resentment (or ressentiment?) when we are disadvantaged unfairly even if we are not harmed, deceived, and so on. Details aside, the main point is that various areas of morality feel very different. Anger does not feel anything like disgust, contempt, and distrust.
Just consider what it is like to see or think about or judge or be the victim of a violent robbery. Now imagine eating human flesh or witnessing adult consensual sibling incest. Next consider spitting on your father’s grave or seeing some similar act of disrespect. Then compare finding out that your lover lied to you in a way that caused no harm (other than the harm to your relationship when you found out). And compare your reaction when a stranger breaks a trivial promise with your reaction when a close friend breaks an important solemn vow. When I introspect on this variety of cases, it is hard for me to find anything interesting that is common or peculiar to these moral experiences.

Moreover, different areas of the brain are involved in judgments about different areas of morality. The brain systems that subserve anger, disgust, resentment, and distrust are at least partially separable (Moll et al. 2008). This diversity in underlying brain structure should not be surprising, since some of these emotions (such as resentment and distrust) are more sophisticated, while others (such as anger and disgust) are more primitive. The latter are associated with parts of the brain that we share with less complex species. Thus, neuroscience, evolutionary psychology, and Shweder’s anthropology all support the variety that introspective phenomenology finds among the areas of morality.

There still might seem to be some general feeling common to all of these cases even though they differ in other ways that are more striking. The powerful emotions in diverse cases might cover up a relatively weak but still shared feeling that occurs in all moral judgments. I cannot prove otherwise, but I see no reason to believe in such a common factor.



The diversity of moral thinking at least shifts the burden of proof to anyone who claims that something important is common and peculiar to all moral thinking. To meet that burden, moral phenomenologists would need to specify some introspectively identifiable element that is shared by moral experiences in all areas (including autonomy, community, purity, and reciprocity) and that is also missing from non-moral experiences (including experiences of financial needs, legal obligations, religious duties, and aesthetic norms). After that, moral phenomenologists would still need to explain why that element is interesting. Perhaps they can do all of this, but it is a heavy burden to bear.

Fitting demands

Mandelbaum (1969) makes a valiant effort to carry this burden. He describes a sense of moral obligation as a felt demand and claims it is based on a sense of what is fitting. However, the notion of fitting is much too broad to capture anything peculiar to morality. A frame can fit a painting physically and aesthetically without being morally good. Moreover, it is not clear what the metaphor of fitting implies. Different frames might fit the same painting, but there is no alternative when a piece fits a certain place in a puzzle. So, is there leeway when an act is morally fitting? And unlike frames and puzzles, what is fitting morally is often negative: not to cheat, not to lie, not to kill, and so on. When I have a moral obligation not to reveal your secret, is every other act fitting? Does my omission of revealing your secret fit every situation I face? Finally, even if Mandelbaum could specify a single relation of fitting, a person’s sense that an act is fitting seems to be no more than a judgment that the act is appropriate. As such, there is no reason to think that any phenomenologically identifiable element of experience accompanies every sense of what is fitting. There is no single way that fitting feels.

A felt demand, in contrast, does feel like a demand. A demand is a speech act, something others say to me, so maybe Mandelbaum’s point is that a sense of moral obligation feels as if someone is confronting you and making a demand on you. (Compare Darwall 2004.) However, I feel no such demand when I judge that you have a moral obligation to a third party, since the demand is on you, not me. Similarly, if I now judge that I had a moral obligation to visit my estranged father before he died, then today I will not experience that obligation as a demand on me, for it is too late to fulfill that obligation. Judgments about moral virtues that are moral ideals or about supererogatory acts also do not feel like demands, because they encourage rather than demand.
Thus, Mandelbaum’s account is limited to a special kind of moral judgment. Moreover, demands can feel very different, depending on who issues them, how reluctant one is to obey them, and what they are obligations to do or not to do. Imagine that I promise my kids that I will take them to our favorite restaurant when I return from a business trip. I am really looking forward to it and would want to go to the restaurant even if I had not promised. Then I do not feel any demand (in any normal sense) even if I judge that I have a moral obligation to take them to the restaurant. So the notion of demand at best captures the phenomenology of only some but not all moral obligations.



Another problem for Mandelbaum’s account arises from the differences among areas of morality. I feel a demand when I promise to do something I hate, such as attend a committee meeting. I also feel what might be called a demand when I recognize my moral obligation not to do something that I have no desire whatsoever to do, such as eat my neighbors. But these so-called demands feel completely different. To call them demands is not to describe how they feel. It is not to do phenomenology. Furthermore, the notion of a demand needs to be defined just as much as what it is invoked to explain. Otherwise, it is either misleading or uninformative to say that what it is like to sense a moral obligation is to sense a demand. So Mandelbaum’s account cannot really solve the problems raised by the differences among areas of morality.

Of course, someone else can always suggest an alternative description that is supposed to cover all of the cases. One might refer to a sense of being bound by rules that are beyond my control, but that is so general that it applies as well to the rules of chess. Others might invoke notions of approval and disapproval, but I can also approve of a move in chess. Besides, there does not seem to be any introspectively identifiable feeling of approval or disapproval. As Urmson (1968, 57) says, “Whether someone approves of something cannot be a mere matter of simply observable behavior or introspectible feeling.” (Cf. also Foot 1978.) Finally, approval is not just a lack of disapproval, so moral judgments would not all be experienced in the same way even if they were all tied to either approval or disapproval.

Moral phenomenology thus faces a dilemma. If the experience of moral thinking is described in narrow terms (such as “anger” or “disgust”), it might be possible to specify an introspectively identifiable feeling, but then it cannot cover the wide diversity of moral cases.
On the other hand, if the experience of moral thinking is described in terms that are broad enough to cover all the moral cases (such as “fitting” or “approval”), then those terms apply to non-moral cases as well and do not refer to introspectively identifiable feelings. I cannot prove that nothing could solve this dilemma, but our brief survey makes it hard to imagine how any phenomenological description could capture anything important that is common and peculiar to all areas of moral experience.

Relations to harm

Some still might hope to find an interesting phenomenology common to all moral judgments within a single sub-area. Even if a person’s love of ice cream is very different from his love for his spouse, there still might be a unified phenomenology of spousal love. Similarly, even if anger is very different from disgust, there might be something common and peculiar to the kind of anger that accompanies judgments that certain acts violate the morality of autonomy, and something else common and peculiar to the kind of disgust that accompanies judgments that certain acts violate the morality of purity.

To test this suggestion, I will focus on one area: the morality of autonomy. I chose this area because it is better studied, and also because many people I know are more likely to deny or denigrate moralities of purity and community. Some even deny that these other areas are really part of morality at all. To sidestep such disputes, let’s focus on this least questionable area. Indeed, let’s focus on the clearest and simplest cases within it: acts that cause physical harm (as opposed to mental suffering, loss of liberty or property, and so on). If there is no unified phenomenology in these core cases, it is unlikely that other areas will be any more unified.

The problem is that people can have many different relations to the harm. First consider heroic actions. When we see someone dive into a raging river to save a drowning child from death, we think of this act as morally praiseworthy (or supererogatory), and we feel admiration or even awe. A general phenomenology of the morality of harm would have to cover this case and show what our experience during moral praising has in common with our experience during moral condemning. It is hard for me to imagine any interesting answer.

Even if we focus on critical moral judgments, it feels very different to watch someone cause suffering than to see someone fail to prevent suffering. When you cause harm, you ought to feel guilty. In contrast, when you merely fail to prevent harm over a long enough time, you ought to be ashamed of yourself. (See Sinnott-Armstrong 2005.) What it is like to feel guilty is very different from what it is like to feel ashamed. These emotions do not seem to share any specifiable element that can be identified by introspection. If you disagree, just try to describe what it is.

Next, forget about preventing harm and consider only violations of a single moral rule against causing harm by hitting someone in the face. Imagine what it is like when I hit you. Compare what it is like when you hit me. Contrast what it is like when you see a friend hit his child or your child or another friend. Or when you see one stranger hit another stranger. Or when you hear about such hitting long ago. The experience is different when the attack seems justified than when it seems unjustified.
It is different when the attack is present than when you remember it happening last week or when you imagine a fictional character causing harm in a way that you judge to be immoral. Many of these cases fit into this diagram:

Agent          Victim          Example              What it feels like to me and to you
First person   First person    I harm myself.       I feel stupid. You feel pity or disdain for me.
First person   Second person   I harm you.          I feel guilty. You feel anger at me.
First person   Third person    I harm him.          I feel guilty. You feel disapproval of me, compassion for him, and maybe fear for yourself.
Second person  First person    You harm me.         I feel anger at you. You feel guilty.
Second person  Second person   You harm yourself.   I feel pity for you. You feel stupid.
Second person  Third person    You harm him.        I feel disapproval of you, compassion for him, and maybe fear for myself. You feel guilty.
Third person   First person    He harms me.         I feel anger at him. You feel compassion for me and disapproval of him.
Third person   Second person   He harms you.        I feel disapproval of him, compassion for you, and maybe fear for myself. You feel anger at him.
Third person   Third person    He harms another.    I feel disapproval of him and compassion for his victim. So do you.

This diagram omits many other feelings and covers up many more subtle differences. The kind of harm (physical pain, social embarrassment, mental anguish, or sexual degradation) affects what its experience is like. The diagram also leaves out many cases, such as when harm is caused by an impersonal or institutional problem for which no individual is responsible.

Despite such oversimplifications, this diagram reveals enough variety to make my main point: there is no single answer to the question, “What is it like to experience a violation of the moral rule against causing harm?” What the experience is like – the phenomenology of moral experience – varies tremendously with one’s relation to the harm.

As before, opponents might reply that there is still a common element – disapproval – that is shared by all of these reactions to causing harm. I doubt, however, that this supposedly shared element is accessible to consciousness. When I imagine the various circumstances in my diagram, I cannot introspect anything to label disapproval. I do disapprove of all of the acts, but that is just another way of saying that I judge them to be morally wrong. There is nothing that it is like for me to make that judgment – no common element that could be discovered by introspective phenomenology.

Ways to cause harm

There still might seem to be a phenomenology of me causing harm to a third person with no special relation to me. But even here there are differences among cases. Contrast two well-known cases, in both of which you harm a stranger. In side track, the only way for you to stop a runaway trolley from killing five innocent strangers is to pull a lever that sends the trolley onto a side track where it will kill one innocent stranger. In footbridge, another runaway trolley will kill five innocent strangers unless you push a large stranger off of a footbridge onto the tracks in front of the trolley so as to stop it. If you push, the one will be killed, but the five will be saved.

These cases and your judgments will, of course, feel different if you make different judgments: that killing the one is wrong in one case but not in the other. So suppose you think that neither act of killing is morally wrong. Only about 10% of subjects think this (though more in certain areas). However, even those who think they are morally justified in pushing the stranger off the footbridge still do not experience this judgment in the same way as their judgment in favor of diverting the trolley onto the side track. They feel an emotional reluctance in the footbridge case that they do not feel in the side track case.

How do I know? First, I am one of these people, at least when the cases are specified properly, so I know this by introspection. Second, Greene et al.’s (2001, 2004) fMRI studies have shown greater activation in the limbic system and slower reaction times. Critics might respond that this is just because these people recognize at some level that the act really is morally wrong. However, that does not fit my own phenomenology. Moreover, our Dartmouth research group (Schaich Borg et al. 2006) also found differences between cases where acts are seen as wrong. Compare the footbridge case with a diabolical machine that will kill one person unless you push a button that will make the machine kill one other person instead. We found more limbic activity when subjects thought about the former case than when they thought about the latter, even when both acts were judged morally wrong. Exit reports also suggested that the decision in the footbridge case posed difficulties not present with the diabolical machine. Using introspective phenomenology, many people find that the diabolical machine does not raise the same kind of emotions, maybe because there is nothing to be gained, so it is an easy case; or maybe because they can apply a simple rule: don’t kill for no benefit. Our study suggests that emotional circuits get activated when there is no rule to apply. If so, then not all moral obligations feel the same way, even when we consider only cases of me causing harm to a stranger.

Finally, moral judgments feel very different after repetition. People who have heard the trolley and footbridge cases hundreds of times are bound to have different experiences than when they first encountered those cases. This commonsense observation seems to be supported by a follow-up study (in progress) in which Schaich Borg et al. asked subjects about the same footbridge case two weeks later and got significantly different brain activations. We cannot, then, say what it is like to recognize moral wrongness or a moral obligation without specifying whether the person has encountered the same case or similar cases before.

Does unity matter?

The variety within moral experience makes it hard to see how moral phenomenology could be unified. There is no way to describe moral experience in general or even the experience of thinking about causing harm to others. This result might seem disappointing, but it is really not so bad.

Phenomenology still might be useful for moral epistemology. If phenomenologists can specify introspectively identifiable features of moral experience that are sufficient to make a moral judgment justified at least defeasibly, then it does not matter whether those features are common and peculiar to moral experience in general or even to a part of moral experience. The judgments with that feature can still be justified even if otherwise similar judgments are not justified and even if very different judgments are justified on the same basis.

The lack of unity in moral phenomenology also does not rule out the possibility that morality is unified in the sense that something is common to moral beliefs (true or false) that makes them all moral beliefs as opposed to beliefs of some non-moral kind. Indeed, for all I have said, there still might be a single objectively true moral principle that reveals what really is morally right and wrong in all areas of morality. Judgments based on a principle need not be experienced in any unified way in order for that principle to be objectively true.

To see one way in which morality might be unified, compare language. It is hard to imagine a phenomenology of language use in general, since there are so many kinds. Moreover, brain scientists have discovered two areas in the brain, Broca’s area and Wernicke’s area, that support the production and the comprehension of language, respectively. Yet nobody concludes that language is not a unified subject. So we should not jump too quickly to that conclusion about morality either.
Still, the disunity of moral phenomenology does raise doubts about whether moral belief is unified enough to be studied scientifically. The problem comes out in research on memory. Scientists distinguish short-term memory (including working memory) from long-term memory. Within long-term memory, they distinguish explicit memory from implicit memory. Within implicit memory, they distinguish emotional associations from conditioned reflexes and both from habits and skills, such as remembering how to ride a bicycle. Within explicit memory, they distinguish semantic memory from episodic memory, such as remembering that I had a birthday party as opposed to remembering being at the party. People can have one kind of memory without the others, such as when I remember that I had a party but cannot remember being at the party. Moreover, what it is like to remember that I had a party is very different from what it is like to remember being at the party, which often involves visual images. Both are very different from what it is like to remember how to ride a bicycle. These distinctions, thus, make it hard to imagine a phenomenology of memory in general. In addition, brain scientists have found numerous areas associated with various kinds of memory. As a result, no serious scientist would propose a single study of memory in general. In that sense, memory is not a unified subject.

Why is language unified and memory not? One reason is that the same syntactic and semantic rules govern both the production and the comprehension of language. These shared rules unify the topic of language despite the lack of shared phenomenology and the multiplicity of brain systems involved. In contrast, no such rules govern different kinds of memory. If there are rules about how or when memories should be formed, these rules are different for semantic memories than for episodic memories, conditioned reflexes, or skills. In addition, language has a common object. When I can comprehend a sentence, such as “Lance won again”, I can also produce the same sentence, and it remains the same sentence regardless of how I feel while thinking about it.
In contrast, semantic memory is about a proposition (that I had a party), whereas episodic memory is about an event or episode (being at the party), and I remember neither a proposition nor an event when I remember how to ride a bicycle (a skill) or when I have implicit memory (habits, emotional associations, and conditioned reflexes).

Is moral belief more like language or more like memory in these ways? It is not clear. In a way, moral judgments are all about a common object: how to live. But different judgments are about different aspects of those lives: external acts, internal intentions or motives, lasting character traits, ways of life, institutional arrangements, and so on. Some moral judgments are about harms, whereas others are about personal relationships or act types. And different moral theories and codes emphasize these different objects of judgment. Hence, it is just not clear whether moral beliefs have a common object in the way language does.

Moral beliefs still might seem to be unified by common rules, like language. Moral rules are supposed to govern both decisions to act and judgments of others’ acts, much as linguistic rules govern both production and comprehension of sentences. Nonetheless, it is not clear that all competent moral agents and judges must apply the same rules in the way that all competent users of a language must employ pretty much the same linguistic rules. There seems to be much more room in morality not only for disagreements about which rules are in force, but also about which rules are moral in nature.

A striking example is documented by Haidt et al. (1993), who asked upper and lower socio-economic class groups from Philadelphia and Brazil whether certain disgusting acts are immoral. Their examples included eating your pet dog after it was run over by a car and consensual adult sibling incest. Lower class people in Brazil generally saw such acts as immoral, whereas upper class people in Philadelphia generally saw such acts as disgusting but not immoral. The study counted a subject’s judgment as moral if that person held that such acts should be punished (legally or socially) and that it was wrong for other societies to allow such acts, even if they were generally accepted in those other societies.

This operational definition does not, of course, cover all judgments that are classified as moral judgments by competent speakers. Some pacifists say that violence is immoral even in self-defense and war, but they do not think that their rule is enforceable. A child spitting on a parent’s grave might also be seen as immoral, even if its wrongness is not seen as enforceable or universal. Still, Haidt’s definition might provide a class of judgments that support scientific laws. However, this is doubtful. Different moral prohibitions that fit his definition have different phenomenologies, are supported by different brain structures, originate from different social processes, and get applied to cases differently by different people. Such a definition, then, hardly makes morality look unified in any important way.

Of course, moral philosophers could stipulate that moral judgments must have certain properties (including formal properties like prescriptivity, universalizability, and supremacy, as well as substantive properties such as relating to well-being or enabling smooth interpersonal interactions). They could then go on to make claims about the class of judgments so defined. There is nothing wrong with stipulation, if it is recognized for what it is. They simply need to admit that their theory applies to only part of what is often taken to be morality. Scientists can also start with stipulation.
They can stipulate that their theories are about only a certain class of beliefs that they call moral beliefs. If their theories reveal interesting truths about those beliefs, then it might not matter much that their claims do not apply to other kinds of belief that others often call moral. Such scientists still need to be clear about what they are up to.

Mineralogists can stipulate that by “jade” they will mean “jadeite”. Then they can go on to formulate significant laws about jade so defined. But that still won’t be a science of jade in its usual sense, which includes both jadeite and nephrite. Nor will it show that jade in its usual sense is unified. Analogously, if scientists stipulate that moral beliefs are X, they might be able to produce a psychology, neuroscience, or biology of X and hence of moral beliefs so defined. However, that still will not be a science of moral belief as a whole in the normal sense. Nor will it show that moral belief is any more unified than moral phenomenology. It still might turn out that no scientific laws apply equally to all kinds of moral belief. That possibility should be taken seriously.

References

Annas, J. (This volume). The phenomenology of virtue. Phenomenology and the Cognitive Sciences.
Darwall, S. (2004). Respect and the second-person standpoint. Proceedings and Addresses of the American Philosophical Association, 78, 43–60.
Foot, P. (1978). Approval and disapproval. In P. Foot, Virtues and vices and other essays in moral philosophy (New ed., pp. 189–208). New York: Oxford University Press.
Greene, J. D., Nystrom, L. E., Engell, A. D., & Darley, J. M. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389–400.
Greene, J. D., Sommerville, R., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, Fall 2004, 55–66.
Haidt, J., Koller, S., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613–628.
Mandelbaum, M. (1969). The phenomenology of moral experience. Baltimore: Johns Hopkins University Press.
Moll, J., Oliveira-Souza, R., Zahn, R., & Grafman, J. (2008). The cognitive neuroscience of moral emotions. In W. Sinnott-Armstrong (ed.), Moral psychology, volume 3: The neuroscience of morality. Cambridge, Mass.: MIT Press.
Schaich Borg, J., Hynes, C., van Horn, J., Grafton, S., & Sinnott-Armstrong, W. (2006). Consequences, action, and intention as factors in moral judgments: An fMRI investigation. Journal of Cognitive Neuroscience, 18, 803–817.
Shweder, R. A., Much, N. C., Mahapatra, M. M., & Park, L. (1997). The “big three” of morality (autonomy, community, and divinity) and the “big three” explanations of suffering. In A. M. Brandt & P. Rozin (eds.), Morality and health (pp. 119–169). New York: Routledge.
Sinnott-Armstrong, W. (2005). You ought to be ashamed of yourself (when you violate an imperfect moral obligation). Philosophical Issues, 15 (Normativity), 193–208.
Timmons, M., & Horgan, T. (2005). Moral phenomenology and moral theory. Philosophical Issues, 15, 56–77.
Timmons, M., & Horgan, T. (This volume). Prolegomenon to a future phenomenology of morals. Phenomenology and the Cognitive Sciences.
Tolhurst, W. (1998). Seemings. American Philosophical Quarterly, 35, 293–302.
Tolhurst, W. (This volume). Seeing what matters: Moral intuition and moral experience. Phenomenology and the Cognitive Sciences.
Urmson, J. O. (1968). The emotive theory of ethics. New York: Oxford University Press.
