A Cognitive Prototype Model of Moral Judgment and Disagreement

Debates about moral judgment have raised questions about the roles of reasoning, culture, and conflict. In response, the cognitive prototype model proposes that over time, through training and cognitive development, people construct notions of blameworthy and praiseworthy behavior by abstracting salient properties that form an ideal representation of each. These properties are the primary features of moral prototypes and include social context interpretation, intentionality, consent, and outcomes. According to this model, when the properties are uniform and coherent, they depict a promoral or an immoral prototype, depending on the orientation of the properties. A promoral prototype is represented by an action that is supported by the culture, intentionally benevolent or other-regarding, consensual, and resulting in positive outcomes. An immoral prototype is an action that is condemned by the culture, intentionally malevolent or self-serving, lacking consent, and resulting in negative outcomes. It is hypothesized that prototypical moral situations will elicit a high level of agreement and require only effortless processing. Alternatively, when properties conflict or the situation deviates from the prototype, a nonprototype results. It is hypothesized that nonprototypical situations will act as a source of moral disagreement and may require more effortful processing.


INTRODUCTION
By the spring of 2005, an unusual family and federal legal battle concerning the Schiavo right-to-die/right-to-live case made national news. Debates about Terri Schiavo's plight were heated, ubiquitous, and intensely political. According to reports (Goodnough, Carey, Jordan, Sexton, & Yardley, 2005; Kampert & Martinez, 2003), Terri suffered a cardiac arrest, which left her brain damaged and in a persistent vegetative state that rendered her incapable of thought or emotion. From 1998 to 2005, Terri's parents and her husband, Michael Schiavo, fought over the guardianship rights that would ultimately determine Terri's fate. Michael Schiavo claimed that his wife had said on several occasions that she did not want to be dependent on life-supporting machines, which was the basis for his decision to remove the feeding tube that kept Terri alive. Terri's parents, however, continued to believe that there was hope and tenaciously challenged Michael Schiavo in a series of legal battles. In the end, the courts sided with Michael Schiavo and the feeding tube was removed. Terri died on March 31, 2005. Throughout this ordeal, Americans grappled with many questions and points of view. Some believed that Terri had the right to die if that is what she wanted and that not to honor her wish would condemn her to a life of prolonged suffering (McNelis, 2003). In a New York Times editorial entitled "Terri Schiavo and the Moral Divide," an anonymous physician wrote, "The primary consideration when determining whether someone's life should or should not be artificially prolonged in a vegetative state should be the individual's expressed wishes" ("Terri Schiavo," 2005, p. 16).
On the other side of the moral divide were people (particularly conservative and religious groups) who felt that Terri had the right to live and believed that we had a moral obligation to keep her alive. For example, James Q. Wilson (2005), public policy scholar and author of The Moral Sense, argued that the feeding tube should not have been withdrawn from Terri Schiavo, because the whole family did not consent to removing the tube. In addition, he claimed that there were doctors who believed that Terri's condition could improve, even though the chance of this happening was extremely slight. Given that there was a smidgen of hope for some recovery, Wilson maintained that removing the tube could be construed as murder, depending on the circumstances. Finally, as did many others who were familiar with the case, Wilson speculated about Michael Schiavo's intentions, as reflected in the following statement:

[Terri's] parents have begged to become her guardians. Her husband has refused. We do not know for certain why the husband has refused. I doubt that he wished to receive for himself the money that still exists from her insurance settlement and, apparently, he has offered to donate that money to charity. Perhaps, being a Catholic, he would like her death to make him free to marry the woman with whom he is now living. Or perhaps (and I think that this is most likely the case) he does not want to live what strikes him as an intolerable life. (p. A16)

Jeb Bush, the governor of Florida, who exercised his political muscle for the purpose of saving Terri's life, claimed that this case was the toughest issue of his career (Goodnough et al., 2005). Not only was this case rife with conflicts pertaining to the intentions of the key players, consent issues, and outcomes that pitted prolonging life against ending suffering, but within the social context, people argued over conceptual meanings. Some people asked, What does it mean to be in a persistent vegetative state? Is it murder to remove a feeding tube from a person in a persistent vegetative state? What does it mean to live a quality life?
The Terri Schiavo case is just one scenario among many that produced moral discord at a national level. Should we engage in stem cell research? Is abortion murder? Was it right to send U.S. soldiers to Vietnam? Such questions have led to debates and conflicts among people living in the same social milieu. And yet there are other moral situations that result in little contention or refutation. Why?

THE PURPOSE OF THIS REVIEW
It is believed that morality is a universal human phenomenon (Kohlberg, Levine, & Hewer, 1983). Some have speculated that the development of innate processes and our interaction with the social environment have contributed to the evolution of a moral system that enables us to strengthen social relationships, identify and contend with the cheaters in a given society, and ultimately improve our chances for survival via cooperation (Kohlberg et al., 1983; Krebs & Denton, 1997; Krebs et al., 2002; Pinker, 2002; Waal, 1996; Wilson, 1993). Although humans possess the ability to construct moral knowledge, we do not always form the same moral conclusions, as the Terri Schiavo case illustrates. In addition, not every moral issue requires the same magnitude of cognitive processing. Some moral judgments feel automatic and effortless, whereas others may require us to grapple for extended periods. Understanding these inconsistencies in moral judgment is necessary, because diverging conclusions about highly charged moral situations often lead to conflict and in some cases may incite acts of violence. As such, it is important to learn how psychological processes drive moral judgments and how moral disagreement becomes manifest.
The overarching goal of this review is to explore the following questions: How does moral disagreement become manifest? Is moral agreement possible? If so, under what conditions? The cognitive prototype perspective argues that moral judgments are affected by specific properties of moral situations. These properties, which are abstracted as we construct moral knowledge and accumulate moral experience, become salient within the context of making judgments and have a bearing on how we reason about issues in the moral domain. The relationships among these properties lead to the construction of categories that delineate what is considered morally "good" or praiseworthy and what is morally "bad" or blameworthy. Within this view, moral situations that do not fit neatly into these categories (i.e., situations that deviate from the prototype) are more prone to moral disagreement.

CONTEMPORARY FORMULATIONS OF MORAL DISAGREEMENT
For centuries there have been great debates about the nature of morality and the sources of moral disagreement. At one end of the spectrum, it is believed that morality is a rational enterprise, and that through pure reasoning one can discover universal moral truths that will ultimately resolve moral disagreements (Kant, 1785/1998; Kohlberg et al., 1983). Supporters of this point of view believe that an objective, universal moral code exists. At the other end of the spectrum, persistent, interminable moral disagreements have fueled the position that there are no universal moral truths (i.e., all people do not share the same moral code). Supporters of this position believe that local conditions, cultural worldviews, and one's passions shape our moral sensibilities (Haidt, 2001; Posner, 1999).

Conflict
Although the specifics vary, a number of scholars agree that conflict is a source of moral disagreement. For instance, philosopher Alasdair MacIntyre (1981) claimed that morality cannot be a rational enterprise, because moral disagreements, viewed as competing moral goods, are incommensurable when they are pitted against one another. There is no objective measure by which to conclude that concerns about justice outrank concerns about survival or that issues pertaining to equality are more salient than issues pertaining to liberty; yet we attempt to make moral judgments as if such impersonal and objective criteria exist.
Shweder and his colleagues (Shweder et al., 1987; Shweder, Much, et al., 2003) have also supported the idea that moral disagreement becomes manifest via a conflict of moral goods. In his "big three" theory, Shweder contended that within cultures, people construct notions of what is morally praiseworthy and virtuous, which can be categorized into three predominant ethics: autonomy (emphasizing the individual, one's will, and personal preferences, such as rights and freedoms), community (focusing on the interdependence of persons, one's role in the group, duty, and the integrity of the collective), and divinity (emphasizing concepts of the natural or sacred order, sanctity, and tradition). According to this theory, moral disagreement can result when there is a conflict between ethics or moral goods, such as duty versus individual rights.
Sociocognitive domain theorists (Nucci, 2001; Turiel, 2002; Turiel et al., 1991; Wainryb, 1993b) view moral disagreement as a result of competing informational assumptions and sociocognitive domain conflict. This theory suggests that, over the course of development, we construct distinct, qualitatively different categories of social knowledge, which include the coordination of a moral domain (issues pertaining to welfare, harm, justice, and rights), a social convention domain (issues pertaining to socially agreed upon rules and uniformities that lead to predictable and structured spaces), and personal jurisdiction understanding (issues pertaining to what falls within the jurisdiction of personal choice). According to this theory, overlapping or competing domains create nonprototypical or multifaceted issues that are prone to disagreement. For instance, abortion pits the welfare of the fetus (i.e., the moral domain) against the mother's choices about her body (i.e., the personal domain; Smetana, 1981).
In his social intuitionist model, John Haidt (2001) argued that moral disagreement becomes manifest when there is a conflict of intuition. Haidt described an intuition as a "sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion" (p. 818). As its name implies, this model deemphasizes reasoning and proposes that moral judgment is largely an automatic response influenced by affect and intuition, which precedes rationalizations or justifications for the judgment. In other words, we intuit first and reason later. According to this model, conflicting intuitions can arise within a person or between persons and act as the source of moral disagreement.

Churchland (1996, 1998) believes that people generate perceptual and behavioral prototypes, which require repeated exposure to and practice of relevant moral categories. These prototypes are continuously readjusted, particularly in the aftermath of experienced failure. He believes that moral complexity arises when stimuli are interpreted as ambiguous or when they activate more than one prototype. Churchland argued that moral disagreement becomes evident when stimuli do not elicit the same activation patterns in different people.

Culture
Shweder and his colleagues (Shweder, 1984, 2003a, 2003b; Shweder, Balle-Jensen, & Goldstein, 2003; Shweder et al., 1987; Shweder, Much, et al., 2003) and John Haidt (2001) have pointed to the importance of understanding how worldviews can affect moral judgments, particularly when people are making diverging judgments about the same phenomenon. In other words, what people believe to be true, valued, virtuous, and desirable affects how a situation is interpreted and can result in moral disagreements between people who hold different worldviews.
Not only do views vary between cultures, they also can vary within cultures. Turiel et al. (1991) found that moral judgments within a specific culture can be heterogeneous and vary across different domains. In other words, rather than judging social issues via a predominant, stereotypic cultural orientation (e.g., "individualistic" societies more often making rights-based decisions), people justify their judgments with orientations that may be thought of as "collectivistic," "libertarian," "traditional," "authoritarian," and so on (see Turiel, 2002, for a discussion). In addition, Turiel (2002) argued that within social hierarchies, those at the top often provide different moral justifications for their actions than their subordinates do. For instance, those at the top tend to emphasize language that represents autonomy and individual rights.
Churchland (1996, 1998) believes that moral disagreement is influenced by a changing culture that must confront new issues as it evolves. He claimed that moral progress consists of gradual adaptations shaped by a collective life that is responding to scientific and technological advancement. This progress is an accumulation of collected and recorded social experiences, which build on a collective sociomoral knowledge. In other words, Churchland believes that as society changes, we experience new dilemmas associated with the development of modern technologies and the evolution of conceptual meanings. For example, modern technologies enable us to sustain the lives of persons in vegetative states. These changes in our society are often accompanied by a set of outcomes that require us to rethink former concepts or to create new ones. (How do we define "person"? Who has the right to make decisions that will sustain or end such a life?) As our social context evolves, we are placed in a position that requires us to refine our moral concepts as we apply them to new situations.

Processing
Haidt's (2001) significant contribution to moral psychology is in drawing attention to how moral judgments do not always undergo a deliberate reasoning process. He believes that moral judgments begin with a quick, automatic intuition of what is right or wrong, which is influenced by affect and neurological functions. Although people can override intuitions when a dilemma requires the brain to engage in more effortful processing, the brain will attempt to conserve its resources by resorting to default intuitive responses. Churchland's (1996, 1998) theory also describes situations in which judgments can be quick and not subject to the weighing and searching involved in deeply reasoned responses. His moral network theory suggests that quick judgments result when stimuli closely resemble prototypically constructed concepts of what is "morally bad." When stimuli deviate from the prototype, more effortful processing is required to form a judgment.
The overlapping ideas of these competing theories suggest that moral disagreement can be influenced by innate processes, cultural knowledge, and specific features of moral situations. Distilling these contributions into a workable framework, one can conclude that a good theory of moral disagreement must be able to explain the role of conflict and its relationship to moral judgment. In addition, it must consider the role of culture and demonstrate how cultural knowledge, in its various forms, influences our interpretation of situations. Finally, it should be able to account for how some issues result in quick moral judgments, whereas others require more deliberative thinking.
In the next section, I propose a new model that will attempt to explain the roles of culture, innate processing, and conflict in moral judgment. In addition, this prototype model identifies the properties of moral prototypes.

MORAL PROTOTYPES AND THEIR PROPERTIES
Before I begin, it is important to note that philosopher Paul Churchland (1996, 1998) developed a prototype-influenced theory called the moral network theory, which uses research in neurobiology to build a theory of metaethics. Churchland contended that, in the physical world, specific skills are acquired as a result of training neuronal networks to respond to sensory input. We acquire moral knowledge in the same way: we develop skills of perception, manipulation, recognition, and behavior that enable us to navigate the social world. Although Churchland did not explicitly define morality, he suggested that the ability to discriminate between right and wrong is an essential component of moral judgment. In the process of acquiring moral knowledge, neuronal networks are trained to partition an abstract, multidimensional, conceptual space into hierarchical categories that discriminate between relevant alternatives (e.g., "morally insignificant" vs. "morally significant," "morally bad" vs. "morally good"). Furthermore, like cases are clustered together and contribute to the development of a central "hot spot" that represents a prototypical instance of that category. In sum, moral perception is believed to involve the activation of neuronal patterns that closely resemble the prototype, as information is assimilated to the nearest of the available prototypes.
One problem with this framework is that Churchland's theory may not be a true prototype theory as defined by scholars who study concepts and categorization (Markman & Gentner, 2001; Medin, 1989; Medin & Smith, 1984). According to this body of research, there are two predominant categorization models: the exemplar model and the prototype model. As opposed to classical views of categorization, which maintain that categories have distinct boundaries defined by a set of fundamental characteristics common to all members of the category, the exemplar and prototype models depict categories as "fuzzy" or ill defined. Category membership is therefore based on a correlation of attributes that characterize typical instances of the category. For example, the classical view holds that boots have a set of characteristics, such as height, that clearly distinguish them from shoes. In reality, however, we know that some footwear resembles both shoes and boots. The exemplar and prototype models are better equipped to account for these unclear cases: they determine which category best represents a stimulus on the assumption that better members share more characteristic properties than poorer ones.
What differentiates the exemplar model from the prototype model is how mental representations are structured. The prototype model assumes that, as one accumulates experience with examples of a category, a summary representation is formed by abstracting out the central tendency of salient properties. This summary or ideal representation becomes the prototype, which is attributed as having the most characteristic properties in the category. Stimuli are compared to the prototype, and those with highly correlated features are likely to be classified as members of that category.
The exemplar view, on the other hand, holds that there is no summary representation. Instead, a category is represented by multiple exemplars. Thus, any new stimulus will be compared to other known examples of that category. For instance, if a grocer suddenly encounters novel produce and does not know whether to categorize it as a fruit or as a vegetable, she would engage in a process that compares the unidentified produce to stored exemplars of fruits and vegetables. If the novel produce more closely resembles a particular fruit, say a banana, and does not closely resemble any of the vegetable exemplars, then it will probably be categorized as a fruit.
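The contrast between the two models can be made concrete with a toy similarity computation. The feature encoding, the stored items, and the matching-features similarity measure below are my own illustrative assumptions, not part of either theory; the sketch only shows the structural difference: a prototype model compares a stimulus to one summary vector per category, whereas an exemplar model compares it to every stored instance.

```python
# Toy contrast between prototype and exemplar categorization.
# Items are binary feature vectors; similarity = number of matching features.
# All features and stored items are invented for illustration.

def similarity(a, b):
    """Count of matching feature values between two vectors."""
    return sum(1 for x, y in zip(a, b) if x == y)

def prototype_classify(stimulus, categories):
    """Compare the stimulus to one summary (prototype) vector per category."""
    def prototype(examples):
        # Central tendency: majority value on each feature dimension.
        return [round(sum(col) / len(col)) for col in zip(*examples)]
    return max(categories, key=lambda c: similarity(stimulus, prototype(categories[c])))

def exemplar_classify(stimulus, categories):
    """Compare the stimulus to every stored exemplar; pick the category
    containing the single best-matching instance."""
    return max(categories, key=lambda c: max(similarity(stimulus, e) for e in categories[c]))

# Features: [sweet, grows_on_tree, eaten_raw, green]
categories = {
    "fruit":     [[1, 1, 1, 0], [1, 1, 1, 1], [1, 0, 1, 0]],
    "vegetable": [[0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]],
}

novel = [1, 1, 1, 0]  # sweet, tree-grown, eaten raw, not green
print(prototype_classify(novel, categories))  # fruit
print(exemplar_classify(novel, categories))   # fruit
```

On typical stimuli the two models agree, as here; they come apart mainly on atypical items that closely match one stored exemplar while sitting far from any category's central tendency.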
The reason Churchland's theory may not represent a true prototype model is that it does not explicitly identify the abstracted properties of what is morally bad or morally good. Consider an analogy using the classic example "bird." Several hierarchical categories are constructed as relevant to identifying birds: "animal" versus "not animal"; "bird" versus "animals that are not birds"; and, within the category "bird," specific birds like robins, chickens, sparrows, and hawks. According to the prototype model, the central tendency of the properties of "bird" (e.g., sings, has wings, flies, lays eggs) is abstracted, and the summary representation of these attributes becomes the prototypical bird. As a result, exemplars that share more characteristics with the prototype will be closer to it than others. For instance, a robin may be a better representation of the prototypical bird than a penguin.
In Churchland's (1996, 1998) model, hierarchical categories are also constructed: "morally significant" versus "morally insignificant"; "morally bad" versus "morally praiseworthy"; and, within "morally bad," actions like lying, cheating, and betraying. However, unlike the bird example, Churchland does not identify the abstracted characteristics of the morally bad or morally good prototypes. Instead, his theory discusses the clustering of like cases, which is a feature of the exemplar model. For example, Churchland claims that similar actions (such as lying, cheating, and betraying) represent a category of vices. Yet relying on actions like lying, cheating, and betraying to represent the "morally bad" category may not be a reliable means of categorization, particularly because there are times when these actions can also be viewed as morally good (see Flanagan, 1996, for a discussion of lies). For example, one could lie to an aggressor in order to save the life of his victim, or lie to a friend in order to plan her surprise party. These types of lies are qualitatively different from lying to cheat someone out of his money. Therefore, it is important for a theory of moral prototypes to identify the abstracted properties of "morally bad" and "morally good" in order to develop a greater capacity to predict how people will respond to and discriminate between particular situations.
As Nussbaum (1993) proposed in her defense of nonrelative virtues, to determine which moral features are objective, we must observe how human experiences overlap as a starting point. The initial challenge in understanding how moral disagreement becomes manifest is to look beyond the actions in themselves, such as slavery or infanticide, in order to collect a substrate of elements that inform moral judgments within a context of local influence. This substrate of elements, I argue, consists of the following moral properties: social context knowledge, intentionality, autonomy/consent, and outcomes. When we encounter moral stimuli, I propose that these properties are abstracted, coordinated, and used to discriminate between what is morally praiseworthy and morally blameworthy.
A review of the empirical research supports this proposal: social context, intentionality, autonomy/consent, and outcomes emerge as the properties that most consistently influence our moral judgments.
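The model's central prediction can be stated as a rough formalization. The +1/-1/0 property encoding and the decision rule below are my own illustrative assumptions, not an operationalization the model itself specifies; the sketch only shows how coherent properties yield a prototype judgment while mixed or conflicted properties flag a nonprototype.

```python
# Illustrative sketch of the cognitive prototype model's core claim:
# when the four properties all point the same way, a situation matches the
# promoral or immoral prototype; mixed signals mark a nonprototype.
# The +1 / -1 / 0 encoding is an assumption made for illustration only.

PROPERTIES = ("social_context", "intentionality", "consent", "outcomes")

def judge(situation):
    """situation maps each property to +1 (promoral pole),
    -1 (immoral pole), or 0 (unclear or conflicted)."""
    values = [situation[p] for p in PROPERTIES]
    if all(v == 1 for v in values):
        return "promoral prototype (predict: agreement, effortless processing)"
    if all(v == -1 for v in values):
        return "immoral prototype (predict: agreement, effortless processing)"
    return "nonprototype (predict: disagreement, effortful processing)"

# A culturally condemned, malevolent, nonconsensual, harmful act:
theft = {"social_context": -1, "intentionality": -1, "consent": -1, "outcomes": -1}
# The Schiavo case: contested meanings, disputed intentions, unclear consent,
# and outcomes pitting the end of suffering against the end of a life.
schiavo = {"social_context": 0, "intentionality": 0, "consent": 0, "outcomes": 0}

print(judge(theft))
print(judge(schiavo))
```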

Social Context
Beliefs about what is sacred, desirable, true, virtuous, and beautiful are socially constructed and transmitted as local knowledge within a particular time and space (Geertz, 1973; Shweder, 1984, 2003a; Shweder, Much, et al., 2003). The knowledge we gain from this social experience influences how we interpret situations and has a bearing on judgments and behaviors. Within the social context, we ask and respond to a wide variety of questions such as, When does life begin? When does one become an adult? Is there an afterlife? In the Terri Schiavo case, we asked, What is a person? What does it mean to live a quality life? The narratives that emerge from such questions can vary across cultures, subcultures, and even generations, resulting in different interpretations of phenomena as shaped by perceptions of a natural order, the metaphysical, history, science and technology, economic systems, political systems, religion, and the environment.
In some instances, diverging beliefs about what is sacred, desirable, true, virtuous, and beautiful can lead to highly contested debates. For example, debates about female genital circumcision (FGC) often involve two perspectives concerning what is believed to be "true" about the practice. Some believe that FGC poses physical dangers, oppresses women, has a negative impact on women's sexual functioning and ability to experience pleasure, and is a coercive action that is often inflicted on those who are too young to fully consent (Nussbaum, 1999; Okin, 1999). Other scholars believe that FGC does not pose serious health risks; symbolizes a rite of passage for young women; and is tied to notions of beauty, femininity, strength, and psychological well-being (Leonard, 2001; Shweder, 2003b).
Not only do knowledge conflicts become salient in highly controversial issues, but also one can observe how the social context has an impact on the taken-for-granted decisions we make in our daily routines. For instance, beliefs of what is sacred, desirable, true, beautiful, and virtuous provide a framework for how we arrange our sleeping spaces. In a study that compared the sleeping preferences of informants in Orissa, India, and Hyde Park, Illinois, Shweder, Balle-Jensen, et al. (2003) found differences in how the informants extract moral meaning from a sample of hypothetical sleeping arrangements. The informants in Orissa were more likely to endorse co-sleeping arrangements with children and tolerated the separation of mother and father at bedtime. Principles associated with chastity anxiety, hierarchy, and protection of the vulnerable emerged from the Oriya responses, whereas the middle-class Americans were more concerned about fostering independence and self-reliance in their children and promoting a healthy marriage as signified by parents who share the same bed in a room apart from the children.
According to Shweder (2003a), cultural knowledge becomes embodied and "constitutive of (and thereby revealed in) a way of life" (p. 11). Thus, the influence of the social context involves more than the mere location of an individual, as moving from one environment to another does not necessarily lead to a transformation of these beliefs. Returning to the sleeping arrangement example, in the United States, Asian Americans are approximately 3 times more likely to co-sleep with an infant than Whites (Willinger, Ko, Hoffman, Kessler, & Corwin, 2003), even though some groups warn parents of the dangers associated with Sudden Infant Death Syndrome and suffocation (McKenna, 2002). Thus, cultural practices and beliefs can be strong and persistent despite external pressures to change.

Anthropologist Clifford Geertz (1973) believed that abstracting humans from their social context only renders them as arbitrary caricatures, which leads to a gross misrepresentation of human life. From this perspective, Geertz claimed that empty universals such as "marriage," "religion," or "property" do not represent the same content from culture to culture and do little to add to our understanding of how people conceptualize reality and the practices manifested in this reality. For example, to say that "all people marry" is misleading, because the term "marriage" imposes ethnocentric meaning onto populations that may not share the same concept. In addition, the claim is bereft of content, because it does not provide information about what marriage means in different contexts. That is, it tells us nothing about the conditions and circumstances that lead people to prefer one type of relationship to another (e.g., preferences for polygamous vs. monogamous relationships).
Because there is such a strong interconnection between human life and context (i.e., man shapes culture, and culture shapes man), new theories of morality must take into account how local knowledge influences moral judgments.
A body of empirical studies has developed our understanding of the role of local knowledge in the formation of moral judgments (Turiel et al., 1991; Wainryb, 1993b; Wainryb & Ford, 1998). In this context, local knowledge is referred to as informational assumptions. Turiel (2002) wrote,

People's assumptions about reality, which come from various sources, also must be taken into account in understanding how they come to decisions. When applied to moral and social decisions, such assumptions function as an informational kind-what I will refer to as informational assumptions. . . . Informational assumptions are not solely particular facts derived directly from some kind of data-gathering process. Such assumptions can be derived from conceptual systems and theories. However, knowledge derived from conceptual systems-scientific or otherwise-is used in an informational-factual sense (often with ambiguities) in situations involving moral judgments. (pp. 143-144)

In other words, these facts manifest from a reality or worldview as experienced in a particular social context. For example, informational assumptions about when life begins and what constitutes a person have been found to bear on how adolescents and young adults reason about abortion (Smetana, 1981; Turiel et al., 1991). In addition, in a study that asked sixth-grade, 10th-grade, and undergraduate university students to evaluate acts that potentially included harm or injustice, Wainryb (1993a) found that a significant number of participants changed their evaluations after these acts were depicted under different circumstances and the informational assumptions were made apparent. For instance, some students said it was unacceptable for a father to spank his child in order to make himself happy. However, they judged spanking as more acceptable when they learned that the father believed it was the only way to remove an evil spirit from a misbehaving child.
In sum, the social context influences how a situation will be interpreted via a culture-, subculture-, or generation-specific lens. The social context shapes our notions about what is sacred, desirable, true, virtuous, and beautiful, which leads to prescriptions about social roles and obligations. These notions coalesce into a worldview or an ideology that informs our moral decisions and provides a framework for identifying the virtues and vices in a given society (Lakoff, 2002); thus, moral prescriptions, in a given social context, often assume valences of right or wrong. For example, some people may believe that it is virtuous to marry for love or wrong to place a parent in a nursing home. Although social prescriptions have a strong bearing on moral judgment, they are not the only salient factors in moral judgment making and do not function in isolation.

Autonomy and Consent
There are moral issues in which the lack of consent has spurred serious discussion. For example, the Wall Street Journal produced a series of reports called "What They Know" (2010), which documented the relationship between new technologies and privacy concerns as more Internet companies covertly track our Internet usage and collect personal information. Returning to the Terri Schiavo case, consent was also a salient factor. Some people wanted to know whether "pulling the plug" was one of Terri's wishes.
One premise of this article is that morality is a social phenomenon that functions to promote cooperation and survival; build relationships; and foster the physical, mental, and spiritual well-being of groups and individuals (Kohlberg et al., 1983; Krebs et al., 2002; Pinker, 2002; Shweder, Much, et al., 2003; Waal, 1996). In this view, morality is a system of interpersonal relationships that is other-regarding (Nucci, 2001; Nucci & Weber, 1995), and moral situations require us to think about how actions directly or indirectly affect others. In many social circumstances, an agent initiates an action that affects a target. For example, a terrorist group abducts two civilians, or a doctor uses a patient to test an experimental drug. In some cases, the target (i.e., person[s] affected by the action) accepts or tolerates the agent's actions. However, there are also situations in which the target does not acquiesce while the agent pursues the action against the target's will. Some philosophers argue that unjustified force or coercion is immoral and advocate for a certain level of autonomy. For example, Gewirth (1986) believed that freedom and well-being are necessary conditions for successful action; thus, these conditions are important to the conception of fundamental human rights. In his Formula of Humanity, Kant (1785/1998) said, "So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means" (p. 38). This principle has been interpreted as emphasizing the respect of rational beings in their ability to make decisions for themselves and to share in deliberations. Using force or deceiving another for the purpose of promoting self-interest manipulates the wills of others and does not allow others to reason freely, thus resulting in a moral violation against humanity (Korsgaard, 1998).
There is a body of developmental research that supports claims for autonomous choice and decision making. Although this particular research neither fully defines what consent is nor identifies the characteristics of those who are in a position to consent, the research does reveal and justify an area of personal agency. In other words, it identifies the circumstances under which people perceive the will of the target to be respected. Returning to social domain theory, the personal domain has to do with actions that are believed to be outside of the jurisdiction of social regulation (Nucci, 2001; Smetana, 2005; Turiel, 2002). These are activities that neither threaten the welfare of others nor pose rights or fairness issues and that fall outside of the socio-conventional domain. In a sense, the domain of personal jurisdiction is a constructed boundary between the self and the group concerning a specific sphere of actions, which typically include preferences and choices pertaining to privacy, leisure, control over one's body, and friendship. For example, informed consent is a salient moral issue in fields concerned with one's body and personal information. Professionals working in such fields as bioethics, medical ethics, research ethics, and human sexuality often grapple with defining consent, who gets to consent, and the degree to which there is adequate disclosure (Christensen, 1988; Delany, 2008; Emson, 2003; Harris, 2003; Horvath & Giner-Sorolla, 2007; Reamer, 2013; Sieber, 1993; Swan, 1979; Walker, 2013).
According to social domain theory, this area of personal discretion does not develop as a result of innate claims for control but rather as a result of a construction of the child's claims and the interactions and negotiations of other members of society (Nucci & Weber, 1995). Affording the self with jurisdiction over decisions about actions in the personal domain is thought to contribute to identity formation, personal integrity, individuality, and agency. Nucci (2001) wrote, "The personal represents the set of social actions that permit the person to construct both a sense of the self as a unique social being . . . and the subjective sense of agency and authorship" (p. 54). In developmental studies of the personal domain, it has been found that children and adults rarely argue about moral issues and children are more likely to endorse the parent's or teacher's authority when it comes to concerns about welfare, rights, or justice (Nucci, Camino, & Sapiro, 1996;Nucci & Weber, 1995;Smetana, 2005). Instead, the majority of parent-child conflicts emerge from disagreements about issues that fall within the personal domain.
In discussions about culture and morality, personal freedoms and a conception of the self as a unique individual have been associated with a Western morality that emphasizes rights and entitlements, whereas non-Western societies have been characterized as cultures that foreground duties and interdependence in their moral discourses (Shweder, Much, et al., 2003). However, Ryan and Deci (2000) believe that a certain level of autonomy is a universal organismic need. According to them, autonomy should not be confused with Western characterizations of independence and individualism. Instead, autonomy refers to volition and the freedom to self-organize behaviors and experiences in a way that endorses one's actions and integrates them into a sense of self.
Perceptions of autonomy needs are also evident in cross-cultural research on the personal domain, which supports the notion that the development of personal choice and agency in children is not limited to Western cultures. Thus, although activities categorized as personal vary according to culture and context, there is evidence to suggest that setting limits to what groups may impose onto the individual may be a universal phenomenon (Hasebe, Nucci, & Nucci, 2004; Helwig, Arnold, Tan, & Dwight, 2003; Nucci & Smetana, 1996; Smetana, 2005; Yau & Smetana, 2003a, 2003b). There is also evidence that cultural and contextual variations influence the expression of personal jurisdiction and consequently may influence how we differentiate which actions are legitimately subject to norms and authority.
The research on the personal domain suggests that, at times, the goal to protect the welfare of others or to promote justice will legitimize control by a higher authority; thus, it is plausible that the consent of an individual or group may not always be warranted, particularly when it involves a moral concern, such as public safety. Because issues that fall within the personal domain are perceived to be associated with personal choice and preference, using undue authority to achieve a particular end against the will of the target may lead to perceptions of wrongdoing. In other words, if there is a set of actions that is not subject to social regulation and an agent engages in these actions by using force, deception, or coercion, then it is predicted that the role of consent will bear directly on moral judgments. Thus, identifying an area of personal jurisdiction provides us with an idea of where consent issues are likely to be salient. For example, in a society in which the choice of a spouse is a personal domain issue, members of that society might view forcing a person to marry against her will as a moral transgression.

Intentions and Outcomes
When people talk about moral issues, they may consider the relationship between the agent's intentions and the inferred or actual outcomes of the action. Oftentimes, they might ask the question, Why? In the Terri Schiavo case, we asked why Michael Schiavo wanted to take Terri off life support. Did he want insurance money? Did he want to remarry? Did he want to end suffering?
It is believed that asking "why" is a fundamental human quality that has considerable implications for how we structure our lives. The responses to these questions not only facilitate our understanding of the environment, events, and human behavior but also have a functional capacity to guide future actions and construct judgments of responsibility that promote or thwart particular behaviors (Weiner, 1985, 2001). In the moral domain, answering why questions specifically enables us to make ascriptions about intentionality. Although to date there is no consensus about how to define the specific nature of intentionality, scholars do agree that intentionality, in its general form, is an important feature in moral judgments of blame and responsibility (Malle, Moses, & Baldwin, 2001; Mele, 2001; Weiner, 2001). For example, moral psychologist Augusto Blasi (1999) stated that one condition of moral action is that "any moral action should be intentional. It cannot be accidentally produced or outside of the agent's consciousness. It must be a result of reasons" (p. 12). When it comes to specific cases, intentionality will have an effect on moral judgment. For instance, philosopher Owen Flanagan (1996) argued that we can differentiate between practical jokes and lies by looking at the intent of the agent. Although in both cases there is a desire to deceive, the goal of a practical joke is to make fun and not cause harm; the intent of an immoral lie is to be self-serving, disregarding of others, and/or to threaten another's well-being. Within the framework of this article, the term intentionality refers to whether people perceive a property of actions as being done purposefully or intentionally, and intention has to do with the actor's mental state (Malle et al., 2001).
In psychology, the theory of mind, attribution theory, and moral development research have produced a body of work relevant to the topic of intentionality and its relationship to outcomes, which is discussed next. Intentionality is believed to be a universal phenomenon and fundamental to how we perceive and interact in the social world (Povinelli, 2001). Although cultures may differ in how they interpret the context of the action or in how they weigh extenuating circumstances (e.g., whether to absolve the agent of responsibility as the result of emotional duress, anger, and immaturity), the evidence suggests that non-Western cultures also use concepts of intentionality to make moral judgments (Ames et al., 2001;Bersoff & Miller, 1993;Chiu & Hong, 1992).
Although there is considerable debate about the extent to which infants and nonhuman primates develop concepts of intentionality (Baird & Baldwin, 2001; Povinelli, 2001; Wellman & Phillips, 2001), studies have found that these concepts start to develop early in life. Beginning with his classic work on children and moral judgments, Jean Piaget (1932/1965) observed that young children used the severity of an outcome as the basis for ascribing moral judgment and later transitioned into forming judgments that also included concepts of intentionality. Today it is believed that children use more than a simple matching rule (e.g., bad outcome = bad intention) and develop an insight about other people's mental states and intentional behavior before the age of 5 (Moses, 2001; Nelson-Le Gall, 1984, 1985). For example, one study (Nelson-Le Gall, 1984) found that 3-year-old children evaluated more harshly a mean agent who caused a foreseeable injury than the same agent who caused an unforeseeable injury. This suggests that in their moral judgments young children distinguish actions that are "done on purpose" from those that are accidental. Although children younger than 5 appear to have an emerging concept of intentionality, it is not as sophisticated as adult concepts.
Over time, it is speculated that children continue to refine distinctions between intentional and unintentional behavior by developing the ability to conceptualize chance, coordinate the ascription of responsibility to more than one agent (Nelson-Le Gall, 1984), distinguish intention from desire (Moses, 2001; Olthof, Ferguson, & Luiten, 1989), and use context clues to make inferences about intentionality, such as the level of an agent's intention (i.e., how badly the agent wanted to perform a negative action; Jones & Thomson, 2001). In addition, it has been found that through the course of development, we construct a "blame schema," which represents the relationship between blame, intentionality, and consequence (i.e., blame/punish = bad intent + bad outcomes; Hermand, Mullet, Tomera, & Touzart, 2001; Zelazo, Helwig, & Lau, 1996). Thus there appears to be a coherent relationship between bad intent, bad outcomes, moral judgments of blame, and punishment.
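The additive/conjunctive "blame schema" described above can be rendered as a toy decision rule. The sketch below is illustrative only; the function name, the binary property encoding, and the three-way output are assumptions introduced for exposition, not a model taken from the cited studies.

```python
# Toy sketch of the "blame schema" (blame/punish = bad intent + bad outcomes).
# The property names and three-way output are illustrative assumptions,
# not a formalization drawn from Hermand et al. (2001) or Zelazo et al. (1996).

def blame_judgment(bad_intent: bool, bad_outcome: bool) -> str:
    """Apply the conjunctive blame rule to two binary moral properties."""
    if bad_intent and bad_outcome:
        # Coherent pairing: bad intent produced the bad outcome.
        return "blame and punish"
    if bad_intent or bad_outcome:
        # The properties conflict (e.g., accidental harm), inviting a
        # mitigated, more effortful judgment.
        return "mitigated judgment"
    return "no blame"

print(blame_judgment(True, True))    # intentional harm
print(blame_judgment(False, True))   # accidental harm
```

The point of the sketch is simply that only the coherent pairing of bad intent and bad outcome yields the full blame response; any mismatch pushes the judgment toward mitigation.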
As previously mentioned, intentionality influences perceptions of responsibility, blame, and punishment and enables us to distinguish between concepts such as manslaughter and murder. In his theory of social conduct, Weiner (1985) asserted that the perception of whether the act was intentional is essential to making moral judgments. According to Weiner, when an act is intentional, the agent self-initiates the action and pursues the act with foresight and knowledge of its consequences. Consider the following examples: (a) a woman consciously drives above the speed limit and accidentally kills a pedestrian; (b) a woman consciously drives above the speed limit for the purpose of killing a pedestrian. In both cases, the intentional act is the decision to drive faster than the speed limit. However, what differentiates the two situations is the outcome intent. In the first case, the action and the goal are not connected, because the intention was to drive recklessly but not to kill anyone. In the second scenario, the action is initiated in order to produce the intended outcome (i.e., the woman drives recklessly specifically to kill the pedestrian). According to Weiner (1985), it is likely that both agents will be held responsible, because speeding is internal and controllable. Internal, controllable actions are more likely to be punished, because they are initiated with a certain level of freedom and choice. However, the agent in the second scenario will probably experience a more elevated judgment of responsibility, because the action was used to produce the intended outcome of killing the pedestrian. The varying degrees of responsibility are influenced by whether a controllable act is perceived to result in an intentional outcome. Generally, the theory posits that people are not judged as harshly when negative outcomes are not intended, though this may not reduce judgments of responsibility to zero.
Thus, when there is a disconnect between intention and outcomes, the unforeseeability of the negative events often leads to more sympathetic judgments (Lagnado & Channon, 2008).
In addition to the varying degrees of intentionality, there is also evidence that some judgments are affected by the magnitude of the outcomes. In their study, Rai and Holyoak (2010) reexamined the famous trolley dilemma, which asks whether it is okay to sacrifice the life of one person in order to save the lives of five others. The investigators created their own version of the dilemma and presented two groups with a different plan to save the passengers. Group 1 was asked to rate its level of agreement with sacrificing two people to save eight out of 10 passengers, and Group 2 was asked to rate its level of agreement with sacrificing two people to save eight out of 40 passengers. They found that there was a significantly higher level of agreement with the plan to save eight out of 10 passengers. They argued that the proportional magnitude of the outcome mattered in these judgments.
In sum, beliefs about intentionality and outcomes significantly influence moral judgments. In terms of the relationship between intentionality, outcomes, and judgments, the most egregious transgressions appear to be the ones that result in negative outcomes, are under the control of the actor, and are not accompanied by mitigating circumstances.

MORAL JUDGMENT AND DISAGREEMENT: A COGNITIVE PROTOTYPE PERSPECTIVE
Earlier, I argued that new models of moral judgment should include the roles of culture, processing, and conflict. I then provided a review of empirical research, which suggests that social context, autonomy and consent, intentionality, and outcomes are salient features of moral situations. In this section, I argue that the features of moral situations are the abstracted properties of moral prototypes, and I use them to articulate a framework that will include the role of culture, explain why some moral issues require more effortful processing, and describe how the conflict of moral properties, in the form of nonprototypes, could lead to moral disagreement. To begin, it is important to note that within the scope of this project, the term moral judgment refers to the evaluation of interpersonal actions relevant to the moral domain. Moral judgments include perceptions of right and wrong and should not be confused with moral decision making, which concerns the process of making a morally relevant decision as it is influenced by perceptions about what one should or ought to do.
It is conceivable that, over time, moral judgment developed as a mechanism to promote morally desirable actions and thwart morally undesirable actions. As discussed earlier, actions relevant to the moral domain concern issues of justice, welfare, or rights (Nucci, 2001;Turiel, 2002), and moral transgressions, such as breaches of justice, welfare, or rights, have a negative impact on spiritual, physical, or psychological well-being. In a moral system, transgressions are likely to be condemned or punished for the purpose of reducing the frequencies of these occurrences. Conversely, actions that enhance spiritual, physical, or psychological well-being and improve the quality of life are likely to receive moral praise and be encouraged. Aristotle (1999) believed that children undergo the development of practical reason, which is the product of maturation, moral education, and experience. Some researchers echo this belief and depict moral development as influenced by the interaction between humans and their social environment (Kohlberg, 1969;Kohlberg et al., 1983;Nucci, 2001).
As previously mentioned, Shweder (2003a, 2003b) noted that we construct meaning about what is good, virtuous, and beautiful within a culture. If the personal domain and our informational assumptions are influenced by our social environment, then the social context and consent/autonomy properties are likely to be sensitive to cultural variation. Some researchers believe that social input has a significant impact on children's development of concepts (see Gelman, 2009, for a review). This seems plausible within the moral domain. As a child interacts with others in her culture, social input may stimulate the processing and refinement of moral concepts. For instance, when a young child says, "That's not fair," after observing that her mother stays up later than she does, the child is making a moral observation about the world from a fairness-means-equal-treatment perspective. After her mother articulates an informational assumption that young children require more sleep than older people in order to be healthy, the child is now introduced to the idea that fairness-can-result-in-unequal-treatment. After additional cultural experiences, such as giving the youngest children a head start in a race or allowing for persons in wheelchairs to board a plane before others, the child gradually learns how fairness can mean different things under different contexts as she actively constructs and refines this knowledge within her environment. Thus, throughout development, children probably undergo a process of organizing and reorganizing their moral experiences into mental representations. In addition, it is possible that exchanges like this contribute to a child's beliefs about which actions are subject to social regulation and which fall within the personal domain. In her challenge to stay up late, the child attempts to assert personal agency.
As a result of such interpersonal negotiations and the testing of boundaries associated with personal control, the child develops a notion about which actions are subject to choice and autonomy (see Nucci, 2001, for a discussion on the development of the personal domain).
In addition to cultural influences, the ability to process information quickly also has an impact on the development of moral judgments. Because it is more efficient to retrieve memory than to engage in reasoning processes, our brains have evolved to store information and conserve cognitive resources by developing automatic responses to particular stimuli (see Markman & Gentner, 2001, for a review). Being able to quickly categorize moral stimuli enables us to react to new situations and make predictions (see Markman & Ross, 2003, for a discussion on concepts and categories). Many skills require repeated practice before they can become automatic or result in quicker response times (Pashler, Johnston, & Ruthruff, 2001). For example, learning to drive a car with a manual transmission requires the driver to think about how and when to shift gears in accordance with a given speed limit and direction. After repeated experiences, shifting gears becomes automatic and requires less deliberate thinking. Training neuronal networks to recognize and quickly process frequently encountered stimuli makes these automatic responses possible. In the moral domain, automated moral judgments are likely to occur as the result of abstracting the properties of moral issues, constructing and refining moral prototypes, and encountering stimuli that closely resemble the prototype. Because the goal of moral judgment is to make an evaluation, it is plausible that within this context, particular features of moral situations, such as intentionality and outcomes, become more salient and relevant to making moral judgments. If we do pay attention to particular features of moral situations, it is likely that they contribute to the formation of two predominant moral prototypes: (a) the promoral prototype and (b) the immoral prototype. Although promoral and immoral prototypes probably share the same fundamental properties, these properties may differ in their orientations.
Within the framework of this model, a prototypical immoral situation is one in which the action (a) as it is interpreted within the social context is not condoned; (b) is intentionally malevolent, self-serving, or disregarding of others; (c) is performed against the will of others; and (d) results in intentional, negative outcomes. In this view, the prototype structure is coherently aligned as each property is directed toward an immoral orientation. The "blame schema" research (e.g., Hermand et al., 2001;Zelazo et al., 1996), which has found that adults often make judgments based on an additive or conjunctive rule (i.e., bad intentions + bad outcomes = punish), supports this claim at a logical level. For example, when an agent intentionally and malevolently assaults a target (i.e., bad intention), we expect the target to experience negative outcomes such as physical harm and psychological distress (i.e., bad outcomes). We do not expect the target to be happy as a result of the beating, nor do we expect the beating to enhance his quality of life in any way; thus a bad intention resulting in a positive outcome would be incoherent. This rule of consistency is also expected to hold true for the promoral prototype, which can be characterized as an action that is (a) supported or considered virtuous within a social context, (b) intentionally benevolent and other-regarding, (c) consensual, and (d) results in intentional positive outcomes. When the stimulus closely resembles either of these prototypes, it is predicted that moral judgment will be quick and automatic. However, when the attributes of the stimulus do not directly correlate with the properties of the prototype (i.e., when the stimulus deviates from the prototype), it is predicted that additional cognitive resources may be required to engage in the process of moral reasoning. Reeder, Vonk, Ronk, Ham, and Lawrence (2004) conducted a study that supports this claim. 
One of the tasks in their study required the subjects to make judgments about a student's helping behavior. The subjects watched vignettes that depicted three conditions: (a) a no-choice condition in which it was the student's job to perform the helping task, (b) a free-choice condition in which the student offered to perform the helping task, and (c) an ulterior motives condition in which the subject was led to infer that the student performed the helping task for personal gain. The investigators found that the subjects took significantly longer to make a judgment about the student in the ulterior motives condition. The authors speculated that the participants had to engage in more effortful processing as a result of the incoherence between the inferred, self-interested intentions of the person and the positive outcome of her actions. In this example, the intentions and outcomes properties were at odds, producing a scenario of increasing complexity that deviates from the prototype.
As a result of the increasing complexity of nonprototypical issues, it is plausible that moral reasoning will be required to coordinate the properties and address inherent conflicts, such as assigning more weight to the most salient properties. Hence, situations that deviate from the prototype (e.g., situations that produce competing orientations or introduce additional mitigating circumstances like retaliation) may be more difficult to judge and effortful to process either within an individual or between persons. Due to the nature of nonprototypical issues, it is conceivable that as the complexity increases so will the likelihood of producing heterogeneous responses (i.e., moral disagreement), particularly when none of the outcomes are appealing. In this view, nonprototypical issues will be more prone to moral disagreement because objective, measurable criteria may not be available for resolving the inherent conflicts; thus, choosing between the "lesser of two evils" might involve subjective preferences that weigh particular properties as more salient than others.
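One way to make the prototype/nonprototype distinction concrete is a small classifier over the model's four properties. This is a hypothetical sketch: the +1 (promoral) / -1 (immoral) encoding, the function name, and the output labels are assumptions introduced for illustration, not part of the model as stated.

```python
# Hypothetical sketch of prototype matching over the model's four properties
# (social context, intentionality, consent, outcomes). The +1/-1 orientation
# encoding is an illustrative assumption, not taken from the source model.

PROPERTIES = ("social_context", "intentionality", "consent", "outcomes")

def classify(situation: dict) -> str:
    """Classify a situation by how coherently its property orientations align."""
    values = [situation[p] for p in PROPERTIES]
    if all(v == 1 for v in values):
        return "promoral prototype: quick praise, high agreement"
    if all(v == -1 for v in values):
        return "immoral prototype: quick condemnation, high agreement"
    # Mixed orientations deviate from both prototypes; the model predicts
    # effortful processing and a greater chance of disagreement.
    return "nonprototype: effortful processing, likely disagreement"

# All properties coherently promoral (e.g., a consensual, benevolent act):
print(classify({"social_context": 1, "intentionality": 1,
                "consent": 1, "outcomes": 1}))

# Conflicting properties (e.g., good intentions but contested consent):
print(classify({"social_context": -1, "intentionality": 1,
                "consent": -1, "outcomes": 1}))
```

The design point is that only full coherence across all four properties triggers the fast prototype response; any mixed orientation falls through to the nonprototype branch, mirroring the prediction that deviation invites effortful reasoning and disagreement.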
Not only does this theoretical framework predict that moral prototypes will lead to quicker judgments, but it also speculates that these prototypes provide us with the ability to make inferences about a given situation. Previous research suggests that people construct default knowledge structures to make quick predictions and judgments about others (see Karniol, 2003, for a review). When little information is available or cognitive resources are low, it is believed that the brain reverts to default representations about persons or situations. Thus, if moral prototypes are constructed from a set of fundamental properties, it is predicted that inferences about social meaning, intention, consent, and/or outcomes will be made when this information has not been made explicit.
For example, in the Turiel et al. (1991) study, the participants were asked to make judgments about abortion. It was found that a number of positions changed after the participants were asked to consider abortion under specific circumstances such as in choosing the sex of the child or in cases of rape or incest. Because these responses shifted after intentionality was made explicit, we can deduce that the subjects made different intentionality inferences about abortion in its decontextualized form.
In another study (Bersoff & Miller, 1993), participants were asked to make a judgment about the following incident: "A man takes a book from a store without paying for it" (p. 676). Bersoff and Miller used this probe to represent a prototypical moral transgression and reported that nearly all participants categorized this probe in moral terms (i.e., as a theft). This probe is interesting because neither the intentionality nor the consent orientations were made explicit; thus, it is likely that the participants relied on default representations to make inferences about these properties.
To further develop an understanding of the difference between moral prototypes and issues that deviate from the prototype, it is important to consider this model in context. The following example depicts a promoral prototype: Sam cares about homeless people. He recently offered a homeless man a full-time job on his farm. The man eagerly accepted it. By offering the man a job, Sam ended the man's suffering, and the man is now happy and no longer homeless.
In this situation, the properties of the prototype are coherently directed toward a promoral orientation. In our social context, we support assisting people in need and value adults working for a living. Sam's intentions are benevolent and other-regarding. The man consents to the job offer, and in the end he is happy and no longer homeless and suffering. It is predicted that such a scenario would be easy to judge and garner moral praise and agreement, because it resembles a promoral prototype.
Conversely, the properties of an immoral prototype would take on immoral stances. An example of such a prototype is as follows: Sam hates homeless people. He abducted a homeless man and forced him to work full-time on his farm as a source of free labor. The man hates his job and doesn't want to be on the farm. The man has experienced psychological and physical harm from being forced to work.
In this scenario, each of the orientations is directed toward an immoral stance. In our social context, Americans support neither the abduction of people nor slavery. In addition, Sam is intentionally self-serving, the homeless man does not consent to his actions, and there is psychological and physical harm. Because this scenario shares the same properties as an immoral prototype, it is predicted that most Americans would judge this as wrong and there would be a high degree of agreement.
In the next example, there is conflict among the properties. Now the scenario deviates from the prototype: Sam cares about homeless children. He recently offered a homeless 6-year-old boy a full-time job on his farm. The boy eagerly accepted it. By offering the boy a job, Sam ended the boy's suffering, and the boy is now happy and no longer homeless.

Several properties in this scenario become incoherent, which creates a nonprototype. In American society, helping homeless children is generally viewed as a good thing, but we also have laws that emanate from a belief that young children should be in school and not at work; thus, there is a social context conflict. In addition, even though Sam has good intentions and the outcomes are positive, some Americans might argue that the boy is not old enough to fully understand the situation, which would challenge the validity of his consent. Others may even infer negative outcomes that could result from the boy not being in school. A nonprototypical scenario such as this might require one to use additional cognitive resources to weigh the competing orientations in order to form a conclusion; thus, the scenario's increased complexity may make it more difficult to judge than a moral prototype. Such a situation may result in a greater degree of moral disagreement if each person does not reason about the conflict in the exact same way.
Returning to the real-life dilemma of the Terri Schiavo case, which produced a high level of moral disagreement, we also can depict this public debate as one that is plagued by layers of incoherent properties. Not only were several of the orientations of the properties unclear and left to personal interpretation and speculation, such as not having a record of Terri's wishes (i.e., whether there was true consent), but the public was also divided in how they conceptualized persons in a vegetative state, which resulted in a social context conflict. Even though Michael Schiavo stated that he took Terri off life support in order to honor her wishes, there were broad speculations about whether this was his true intention. Finally, in the end, there were the competing potential outcomes of ending an insufferable life versus doing everything possible to save a life.
It is important to note that there may be times when people believe a situation to be wrong even if there are no observable negative consequences. For instance, Haidt, Koller, and Dias (1993) found that people continued to see particular situations in moral terms even after the situations were rendered harmless. Furthermore, the participants of the study often could not explain why. Haidt (2001) referred to this phenomenon as "moral dumbfounding" and claimed that these moral responses provide evidence that moral judgments are more intuitive than rational. In a description of his social intuitionist model, Haidt used the following scenario as an example:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried to make love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as their special secret, which makes them feel even closer to each other. (p. 814)

According to Haidt, most of the participants who responded to this probe (though, importantly, not all) said that it was wrong for Mark and Julie to make love, despite the scenario's design to strip away any potential for harm. Of interest, in his attempt to control for harm, Haidt inadvertently manipulated the properties of the moral issue. In this instance, the couple did not act out of ill intentions, the action was consensual, and there were no observable negative outcomes. Most of the subjects wanted to object to Mark and Julie's actions but were unable to justify their responses. Haidt (2001) used the phenomenon of moral dumbfounding to support his social intuitionist model of moral judgment.
However, an alternative explanation is that particular concepts or actions, as they are constructed and understood within a social context, are loaded with moral meaning and cannot be completely dissociated from harm. This was made evident when Shweder et al. (1987) observed that some actions, such as a widow wearing bright clothing, were considered morally wrong by the Brahmans but perceived as a matter of social convention by Americans. For the Brahmans, this action, as it is conceptualized and made meaningful within their social context, is associated with negative outcomes in the form of a threat to the spiritual well-being of the deceased husband. In another example, a significant number of children continued to view eating chicken with bare hands as a health risk, and not merely a violation of social convention, despite the investigators' efforts to minimize the possible threat of disease transfer by emphasizing the sanitary precautions of food handlers. The investigators explain that the children in the study lived in northeastern Brazil, where there was a serious cholera epidemic. In this social context, eating chicken with bare hands was perceived to be a threat to well-being, even after the probe suggested that it was safe.
Thus, inherent to some concepts (such as incest, rape, slavery, or oppression) is a representation of threat to well-being, which is intertwined with and inseparable from their constructed meanings. In other words, we cannot ask someone to imagine a benign form of "cancer," because the term cancer, as it is applied within our social context, names a disease that threatens life. Likewise, some of the participants in Haidt's study may not have been able to conceive of a benign form of incest, because the concept includes a notion of harm. To strip it of its associations with negative consequences is to change its meaning so that it no longer resembles the original concept. Thus, it is plausible that within a social context, some concepts convey a conceptual threat in which there is a perceived potential for harm inherent in the concept's constructed meaning. In other words, there is a belief that such a concept, when enacted, poses a threat to physical, psychological, and/or spiritual well-being and can affect the quality of life. As a result, some people may judge a nonprototypical situation as wrong, even when there is good intentionality, full consent, and positive outcomes, based on how they interpret the meaning of the action within a given social context. This is why the role of social context is an important feature of the cognitive prototype model. However, the social context property in itself is not the only salient factor in making moral judgments (as indicated by the fact that not every participant found Haidt's incest probe to be immoral). In some cases, people may weigh the intentions, consent, or outcomes properties more heavily. Thus, a scenario with a social context conflict can be classified as a nonprototype. As with other nonprototypical issues, such a conflict is likely to produce moral disagreement as a result of its incoherent properties.

FUTURE TESTING OF THE COGNITIVE PROTOTYPE MODEL
In sum, the cognitive prototype model posits that, over time and as a result of development, training/cultural influence, and experience, people construct notions of morally blameworthy and morally praiseworthy behavior by abstracting out salient properties of moral situations that lead to an ideal representation of each. These properties include (a) social context meaning, (b) intentionality, (c) claims for consent and autonomy, and (d) outcomes. The properties of immoral prototypes are uniformly directed toward immoral orientations, and the properties of promoral prototypes are uniformly directed toward promoral orientations. Nonprototypical issues deviate from the prototype and often represent a structure in which there are competing orientations or unusual circumstances.
Within this framework, several hypotheses can be formulated. First, if people construct moral prototypes, then prototypical moral situations should lead to widespread agreement. In other words, a very high percentage of people within a given group will likely judge prototypes as right or wrong, relative to whether the prototype is promoral or immoral. This is because the stimuli would correlate strongly with default, prototypical representations of a moral category and be classified as members of that category. In addition, the uniform orientations of the properties would not create conflict within or between persons. Because moral prototypes lack conflict or unusual circumstances, these evaluations should fall at the extreme ends of a right/wrong continuum and should result in faster response times than nonprototypes. Second, because nonprototypes deviate from moral prototypes, it is predicted that they will act as a source of moral disagreement due to their unusual circumstances or conflicts. The more a nonprototype deviates from the prototype, the more cognitive effort a judgment may require.
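The classification logic behind these hypotheses can be made concrete in code. The following Python sketch is purely illustrative: the `Scenario` class, the +1/−1/0 coding of property orientations, and the classification rule are assumptions introduced here for exposition, not instruments from the studies described.

```python
# Hypothetical sketch (not from the source studies): each scenario is coded
# on the four properties, and it counts as a prototype only when every
# orientation points the same way.
from dataclasses import dataclass


@dataclass
class Scenario:
    """Orientation of each property: +1 promoral, -1 immoral, 0 conflicted/unclear."""
    social_context: int
    intentionality: int
    consent: int
    outcomes: int

    def classify(self) -> str:
        props = (self.social_context, self.intentionality,
                 self.consent, self.outcomes)
        if all(p == 1 for p in props):
            return "promoral prototype"   # predicted: high agreement, fast judgment
        if all(p == -1 for p in props):
            return "immoral prototype"    # predicted: high agreement, fast judgment
        return "nonprototype"             # predicted: disagreement, effortful processing


# The farm-job vignette: good intentions and outcomes, but a conflicted
# social context and questionable consent.
sam = Scenario(social_context=0, intentionality=1, consent=0, outcomes=1)
print(sam.classify())  # nonprototype
```

On this coding, the model's prediction falls out of a single check for coherence: any mixture of orientations, however well-intentioned, lands in the nonprototype category where disagreement is expected.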
Preliminary studies have been performed to test some of these hypotheses (Larson, 2013, 2015). In these experiments, participants (varying by gender, race, age, ethnicity, education, and income) judged scenarios that were structured like moral prototypes and nonprototypes. Judgments of the prototypical scenarios yielded perfect or near-perfect agreement, whereas nonprototypical scenarios often resulted in varying moral judgments (i.e., moral disagreement). In addition, recent cognitive research in social domain theory supports the idea that moral prototypes may lead to faster response times than nonprototypes (Lahat, Helwig, & Zelazo, 2012).

CONCLUSION AND IMPLICATIONS
The cognitive prototype model asserts that individuals interpret moral situations using informational assumptions and knowledge constructed within a particular culture. In addition to the meaning they derive from the situation, the intentions of the agent, consent/autonomy issues, and the outcomes of the action also have a bearing on their moral evaluations. When these properties are harmonious and uniformly directed toward an immoral or promoral stance, a high level of agreement and quick, effortless judgments will result. When the situation deviates from the prototype, the nonprototype can act as a source of moral disagreement and, perhaps, lead to more effortful processing, depending on how far the nonprototype deviates from the prototype.
If hypotheses associated with the cognitive prototype model are supported after further testing, the results have the potential to be useful for multiple areas of study including psychology, philosophy, moral education, and law. There are also two other implications worth noting.

Moral Calculus
The results of the preliminary studies (Larson, 2013, 2015) suggest that there is a high degree of agreement on prototypical situations and a greater likelihood of disagreement on nonprototypical situations; that none of the nonprototypes was rated as significantly more wrong or right than the immoral or promoral prototypes, respectively; that immoral properties appear to weigh more heavily on judgments than promoral properties; and that some combinations of properties can produce additive effects. Previous research supports some of these initial findings. For example, Lagnado and Channon (2008) found that intentional actions were rated as more blameworthy than unintentional actions when they resulted in the same outcomes. Thus, the level of intentionality affected judgment. Rai and Holyoak (2010) found that the proportional magnitude of the outcomes influenced judgments. Previous research has also found an asymmetry in judgment between positive and negative actions, which suggests that risks carry more judgment weight than benefits (Kahneman & Tversky, 1979; Malle & Bennett, 2002; Ohtsubo, 2007; Pizarro & Bloom, 2003). Thus, there appear to be predictable and quantifiable aspects of moral judgment, which could lead to the development of a moral judgment calculus. In the same way that economic models are used to forecast trends or decision models are used to predict choices, a moral judgment model could be used to predict the anticipated level of agreement or disagreement about a moral situation, making it possible to forecast public sentiment about controversial issues.
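To illustrate what such a calculus might look like, consider a minimal additive model. Everything in this sketch is hypothetical: the weights, the function name, and the scoring rule are invented for illustration and are not drawn from the preliminary studies; they merely encode the two qualitative findings above (additivity, and immoral properties weighing more than promoral ones).

```python
# Hypothetical additive "moral calculus" (weights are invented, not
# empirical): negative orientations carry more weight than positive ones,
# reflecting the reported positive/negative asymmetry.
NEG_WEIGHT = 1.5   # assumed penalty multiplier for immoral orientations
POS_WEIGHT = 1.0   # baseline weight for promoral orientations


def judgment_score(social_context, intentionality, consent, outcomes):
    """Sum property orientations (+1 promoral, -1 immoral, 0 conflicted),
    weighting negative orientations more heavily than positive ones."""
    total = 0.0
    for p in (social_context, intentionality, consent, outcomes):
        total += POS_WEIGHT * p if p >= 0 else NEG_WEIGHT * p
    return total


# A coherent immoral prototype scores at the extreme negative end...
print(judgment_score(-1, -1, -1, -1))  # -6.0
# ...while a conflicted nonprototype lands nearer the midpoint, the
# region where the model predicts disagreement.
print(judgment_score(0, 1, 0, 1))      # 2.0
```

A fitted version of such a model, with weights estimated from judgment data rather than assumed, is the kind of instrument the "moral calculus" proposal envisions for forecasting agreement levels.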

Policymaking
Understanding the impact that nonprototypical situations can have on judgment could lead to stronger policy development. Sometimes laws, policies, and procedures are drawn from prototypical situations. For example, zero-tolerance practices in the 1990s punished students who brought weapons or drugs to school. As a result, some schools prevented students from bringing plastic butter knives and inhalers (Associated Press, 2007). Because zero-tolerance practices were not flexible enough to account for nonprototypical circumstances, some schools were subject to harsh criticism and even lawsuits (Cauchon, 1999). Understanding the nuances between prototypical and nonprototypical situations, and how the social context, intentionality, consent/autonomy, and outcome variables affect judgment, could help policymakers draft stronger, more flexible policies.