
The Righteous Mind:

Why Good People Are Divided by Politics and Religion

by Jonathan Haidt

After finishing this book in February of 2022, I wrote,

 

"Haidt opened my eyes to how our evolutionary past was a major factor in turning us into righteous beings, and to how certain triggers for righteousness have evolved and infected us in different ways. Highly recommended. He never questions the need for righteousness itself in today's world, but then, very few do."

 

My clippings below collapse this 530-page book into 14 pages, as measured in 12-point type in Microsoft Word.


See all my book recommendations.  


Here are the selections I made:


Turiel’s account of moral development differed in many ways from Kohlberg’s, but the political implications were similar: morality is about treating individuals well. It’s about harm and fairness (not loyalty, respect, duty, piety, patriotism, or tradition).

 

Most societies have chosen the sociocentric answer, placing the needs of groups and institutions first, and subordinating the needs of individuals. In contrast, the individualistic answer places individuals at the center and makes society a servant of the individual.

 

The sociocentric answer dominated most of the ancient world, but the individualistic answer became a powerful rival during the Enlightenment. The individualistic answer largely vanquished the sociocentric approach in the twentieth century as individual rights expanded rapidly, consumer culture spread, and the Western world reacted with horror to the evils perpetrated by the ultrasociocentric fascist and communist empires. (European nations with strong social safety nets are not sociocentric on this definition. They just do a very good job of protecting individuals from the vicissitudes of life.) 

 

Actions that Indians and Americans agreed were wrong:
• While walking, a man saw a dog sleeping on the road. He walked up to it and kicked it.
• A father said to his son, “If you do well on the exam, I will buy you a pen.” The son did well on the exam, but the father did not give him anything.

Actions that Americans said were wrong but Indians said were acceptable:
• A young married woman went alone to see a movie without informing her husband. When she returned home her husband said, “If you do it again, I will beat you black and blue.” She did it again; he beat her black and blue. (Judge the husband.)
• A man had a married son and a married daughter. After his death his son claimed most of the property. His daughter got little. (Judge the son.)

Actions that Indians said were wrong but Americans said were acceptable:
• In a family, a twenty-five-year-old son addresses his father by his first name.
• A woman cooked rice and wanted to eat with her husband and his elder brother. Then she ate with them. (Judge the woman.)
• A widow in your community eats fish two or three times a week.
• After defecation a woman did not change her clothes before cooking.

 

I had found evidence for Hume’s claim. I had found that moral reasoning was often a servant of moral emotions, and this was a challenge to the rationalist approach that dominated moral psychology.

 

I concluded instead that:
• The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures. Sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.
• People sometimes have gut feelings—particularly about disgust and disrespect—that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.
• Morality can’t be entirely self-constructed by children based on their growing understanding of harm. Cultural learning or guidance must play a larger role than rationalist theories had given it.

 

One of the greatest truths in psychology is that the mind is divided into parts that sometimes conflict.1 To be human is to feel pulled in different directions, and to marvel—sometimes in horror—at your inability to control your own actions.

 

Western philosophy has been worshipping reason and distrusting the passions for thousands of years.4 There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. Morality binds and blinds. The true believers produce pious fantasies that don’t match reality, and at some point somebody comes along to knock the idol off its pedestal. That was Hume’s project, with his philosophically sacrilegious claim that reason was nothing but the servant of the passions. 

 

We do moral reasoning not to reconstruct the actual reasons why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment.

 

In this chapter I tried to show that Hume was right:
• The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant.
• You can see the rider serving the elephant when people are morally dumbfounded. They have strong gut feelings about what is right and wrong, and they struggle to construct post hoc justifications for those feelings. Even when the servant (reasoning) comes back empty-handed, the master (intuition) doesn’t change his judgment.
• The social intuitionist model starts with Hume’s model and makes it more social. Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth.
• Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first. If you ask people to believe something that violates their intuitions, they will devote their efforts to finding an escape hatch—a reason to doubt your argument or conclusion. They will almost always succeed.

 

Here’s the same idea from Buddha: It is easy to see the faults of others, but difficult to see one’s own faults. One shows the faults of others like chaff winnowed in the wind, but one conceals one’s own faults as a cunning gambler conceals his dice.

 

Every emotion (such as happiness or disgust) includes an affective reaction, but most of our affective reactions are too fleeting to be called emotions (for example, the subtle feelings you get just from reading the words happiness and disgust).

 

He showed people the pairs of photographs from each contest with no information about political party, and he asked them to pick which person seemed more competent. He found that the candidate that people judged more competent was the one who actually won the race about two-thirds of the time.

 

Zhong has also shown the reverse process: immorality makes people want to get clean. People who are asked to recall their own moral transgressions, or merely to copy by hand an account of someone else’s moral transgression, find themselves thinking about cleanliness more often, and wanting more strongly to cleanse themselves.26 They are more likely to select hand wipes and other cleaning products when given a choice of consumer products to take home with them after the experiment. Zhong calls this the Macbeth effect, named for Lady Macbeth’s obsession with water and cleansing after she goads her husband into murdering King Duncan. (She goes from “A little water clears us of this deed” to “Out, damn’d spot! out, I say!”) 

 

In other words, under normal circumstances the rider takes its cue from the elephant, just as a lawyer takes instructions from a client. But if you force the two to sit around and chat for a few minutes, the elephant actually opens up to advice from the rider and arguments from outside sources. Intuitions come first, and under normal circumstances they cause us to engage in socially strategic reasoning, but there are ways to make the relationship more of a two-way street.

 

IN SUM
The first principle of moral psychology is Intuitions come first, strategic reasoning second. In support of this principle, I reviewed six areas of experimental research demonstrating that:
• Brains evaluate instantly and constantly (as Wundt and Zajonc said).
• Social and political judgments depend heavily on quick intuitive flashes (as Todorov and work with the IAT have shown).
• Our bodily states sometimes influence our moral judgments. Bad smells and tastes can make people more judgmental (as can anything that makes people think about purity and cleanliness).
• Psychopaths reason but don’t feel (and are severely deficient morally).
• Babies feel but don’t reason (and have the beginnings of morality).
• Affective reactions are in the right place at the right time in the brain (as shown by Damasio, Greene, and a wave of more recent studies).

 

Why do we have this weird mental architecture? As hominid brains tripled in size over the last 5 million years, developing language and a vastly improved ability to reason, why did we evolve an inner lawyer, rather than an inner judge or scientist? Wouldn’t it have been most adaptive for our ancestors to figure out the truth, the real truth about who did what and why, rather than using all that brainpower just to find evidence in support of what they wanted to believe? That depends on which you think was more important for our ancestors’ survival: truth or reputation.

 

Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is “a one-sided attempt to rationalize a particular point of view.”

 

Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy.

 

When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time—which is almost all of the time—accountability pressures simply increase confirmatory ...


 

A central function of thought is making sure that one acts in ways that can be persuasively justified or excused to others.

 

Tetlock concludes that conscious reasoning is carried out largely for the purpose of persuasion, rather than discovery. But Tetlock adds that we are also trying to persuade ourselves. We want to believe the things we are about to say to others.

 

The findings get more disturbing. Perkins found that IQ was by far the biggest predictor of how well people argued, but it predicted only the number of my-side arguments. Smart people make really good lawyers and press secretaries, but they are no better than others at finding reasons on the other side. Perkins concluded that “people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”

 

This is how the press secretary works on trivial issues where there is no motivation to support one side or the other. If thinking is confirmatory rather than exploratory in these dry and easy cases, then what chance is there that people will think in an open-minded, exploratory way when self-interest, social identity, and strong emotions make them want or even need to reach a preordained conclusion?

 

WE LIE, CHEAT, AND JUSTIFY SO WELL THAT WE HONESTLY BELIEVE WE ARE HONEST

 

Many psychologists have studied the effects of having “plausible deniability.” In one such study, subjects performed a task and were then given a slip of paper and a verbal confirmation of how much they were to be paid. But when they took the slip to another room to get their money, the cashier misread one digit and handed them too much money. Only 20 percent spoke up and corrected the mistake.

 

But the story changed when the cashier asked them if the payment was correct. In that case, 60 percent said no and returned the extra money. Being asked directly removes plausible deniability; it would take a direct lie to keep the money. As a result, people are three times more likely to be honest.

 

The difference between a mind asking “Must I believe it?” versus “Can I believe it?” is so profound that it even influences visual perception. Subjects who thought that they’d get something good if a computer flashed up a letter rather than a number were more likely to see the ambiguous figure as the letter B, rather than as the number 13.

 

If people can literally see what they want to see—given a bit of ambiguity—is it any wonder that scientific studies often fail to persuade the general public?

 

Scientists are really good at finding flaws in studies that contradict their own views, but it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds. I’ve seen this happen in my colleagues (and myself) many times,34 and it’s part of the accountability system of science—you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question the methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.

 

And now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day. Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief. You’ll find partisan websites summarizing and sometimes distorting relevant scientif...


 

WE CAN BELIEVE ALMOST ANYTHING THAT SUPPORTS OUR TEAM
Many political scientists used to assume that people vote selfishly, choosing the candidate or policy that will benefit them the most. But decades of research on public opinion have led to the conclusion that self-interest is a weak predictor of policy preferences. Parents of children in public school are not more supportive of government aid to schools than other citizens; young men subject to the draft are not more opposed to military escalation than men too old to be drafted; and people who lack health insurance are not more likely to support government-issued health insurance than people covered by insurance.35 Rather, people care about their groups, whether those be racial, regional, religious, or political. The political scientist Don Kinder summarizes the findings like this: “In matters of public opinion, citizens seem to be asking themselves not ‘What’s in it for me?’ but rather ‘What’s in it for my group?’”

 

Liberals and conservatives actually move further apart when they read about research on whether the death penalty deters crime, or when they rate the quality of arguments made by candidates in a presidential debate, or when they evaluate arguments about affirmative action or gun control.

 

Westen was actually pitting two models of the mind against each other. Would subjects reveal Jefferson’s dual-process model, in which the head (the reasoning parts of the brain) processes information about contradictions equally for all targets, but then gets overruled by a stronger response from the heart (the emotion areas)? Or does the partisan brain work as Hume says, with emotional and intuitive processes running the show and only putting in a call to reasoning when its services are needed to justify a desired conclusion?

 

As they put it, “skilled arguers … are not after the truth but after arguments supporting their views.”50 This explains why the confirmation bias is so powerful, and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet, in fact, it’s very hard, and nobody has yet found a way to do it.51 It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind).

 

I’m not saying we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments,52 but they are often disastrous as a basis for public policy, science, and law.53 Rather, what I’m saying is that we must be wary of any individual’s ability to reason.

 

If you want to make people behave more ethically, there are two ways you can go. You can change the elephant, which takes a long time and is hard to do. Or, to borrow an idea from the book Switch, by Chip Heath and Dan Heath,54 you can change the path that the elephant and rider find themselves traveling on. You can make minor and inexpensive tweaks to the environment, which can produce big increases in ethical behavior.

 

The first principle of moral psychology is Intuitions come first, strategic reasoning second. To demonstrate the strategic functions of moral reasoning, I reviewed five areas of research showing that moral thinking is more like a politician searching for votes than a scientist searching for truth:
• We are obsessively concerned about what others think of us, although much of the concern is unconscious and invisible to us.
• Conscious reasoning functions like a press secretary who automatically justifies any position taken by the president.
• With the help of our press secretary, we are able to lie and cheat often, and then cover it up so effectively that we convince even ourselves.
• Reasoning can take us to almost any conclusion we want to reach, because we ask “Can I believe it?” when we want to believe something, but “Must I believe it?” when we don’t want to believe. The answer is almost always yes to the first question and no to the second.
• In moral and political matters we are often groupish, rather than selfish. We deploy our reasoning skills to support our team, and to demonstrate commitment to our team.

 

The Penn students spoke almost exclusively in the language of the ethic of autonomy, whereas the other groups (particularly the working-class groups) made much more use of the ethic of community, and a bit more use of the ethic of divinity.

 

I described research showing that people who grow up in Western, educated, industrial, rich, and democratic (WEIRD) societies are statistical outliers on many psychological measures, including measures of moral psychology. I also showed that:
• The WEIRDer you are, the more you perceive a world full of separate objects, rather than relationships.
• Moral pluralism is true descriptively. As a simple matter of anthropological fact, the moral domain varies across cultures.
• The moral domain is unusually narrow in WEIRD cultures, where it is largely limited to the ethic of autonomy (i.e., moral concerns about individuals harming, oppressing, or cheating other individuals). It is broader—including the ethics of community and divinity—in most other societies, and within religious and conservative moral matrices within WEIRD societies.
• Moral matrices bind people together and blind them to the coherence, or even existence, of other matrices. This makes it very difficult for people to consider the possibility that there might really be more than one form of moral truth, or more than one valid framework for judging people or running a society.

In the next three chapters I’ll catalogue the moral intuitions, showing exactly what else there is beyond harm and fairness. I’ll show how a small set of innate and universal moral foundations can be used to construct a great variety of moral matrices. I’ll offer tools you can use to understand moral arguments emanating from matrices that are not your own.

 

Cuteness primes us to care, nurture, protect, and interact.

 

It makes no evolutionary sense for you to care about what happens to my son Max, or a hungry child in a faraway country, or a baby seal. But Darwin doesn’t have to explain why you shed any particular tear. He just has to explain why you have tear ducts in the first place, and why those ducts can sometimes be activated by suffering that is not your own.8 Darwin must explain the original triggers of each module. The current triggers can change rapidly. We care about violence toward many more classes of victims today than our grandparents did in their time.

 

Bumper stickers are often tribal badges; they advertise the teams we support, including sports teams, universities, and rock bands.

 

The moral matrix of liberals, in America and elsewhere, rests more heavily on the Care foundation than do the matrices of conservatives, and this driver has selected three bumper stickers urging people to protect innocent victims.

 

It was harder to find bumper stickers related to compassion for conservatives, but the “wounded warrior” car is an example. This driver is also trying to get you to care, but conservative caring is somewhat different—it is aimed not at animals or at people in other countries but at those who’ve sacrificed for the group.12 It is not universalist; it is more local, and blended with loyalty.

 

Far worse than lust, gluttony, violence, or even heresy is the betrayal of one’s family, team, or nation.

 

The current triggers of the Authority/subversion foundation, therefore, include anything that is construed as an act of obedience, disobedience, respect, disrespect, submission, or rebellion, with regard to authorities perceived to be legitimate.

 

Current triggers also include acts that are seen to subvert the traditions, institutions, or values that are perceived to provide stability.

 

The original adaptive challenge that drove the evolution of the Sanctity foundation, therefore, was the need to avoid pathogens, parasites, and other threats that spread by physical touch or proximity.

 

• The Care/harm foundation evolved in response to the adaptive challenge of caring for vulnerable children. It makes us sensitive to signs of suffering and need; it makes us despise cruelty and want to care for those who are suffering.
• The Fairness/cheating foundation evolved in response to the adaptive challenge of reaping the rewards of cooperation without getting exploited. It makes us sensitive to indications that another person is likely to be a good (or bad) partner for collaboration and reciprocal altruism. It makes us want to shun or punish cheaters.
• The Loyalty/betrayal foundation evolved in response to the adaptive challenge of forming and maintaining coalitions. It makes us sensitive to signs that another person is (or is not) a team player. It makes us trust and reward such people, and it makes us want to hurt, ostracize, or even kill those who betray us or our group.
• The Authority/subversion foundation evolved in response to the adaptive challenge of forging relationships that will benefit us within social hierarchies. It makes us sensitive to signs of rank or status, and to signs that other people are (or are not) behaving properly, given their position.
• The Sanctity/degradation foundation evolved initially in response to the adaptive challenge of the omnivore’s dilemma, and then to the broader challenge of living in a world of pathogens and parasites. It includes the behavioral immune system, which can make us wary of a diverse array of symbolic objects and threats. It makes it possible for people to invest objects with irrational and extreme values—both positive and negative—which are important for binding groups together.

 

Liberals have a three-foundation morality, whereas conservatives use all six. Liberal moral matrices rest on the Care/harm, Liberty/oppression, and Fairness/cheating foundations, although liberals are often willing to trade away fairness (as proportionality) when it conflicts with compassion or with their desire to fight oppression. Conservative morality rests on all six foundations, although conservatives are more willing than liberals to sacrifice Care and let some people get hurt in order to achieve their many other moral objectives.

 

• We added the Liberty/oppression foundation, which makes people notice and resent any sign of attempted domination. It triggers an urge to band together to resist or overthrow bullies and tyrants. This foundation supports the egalitarianism and antiauthoritarianism of the left, as well as the don’t-tread-on-me and give-me-liberty antigovernment anger of libertarians and some conservatives.
• We modified the Fairness foundation to make it focus more strongly on proportionality. The Fairness foundation begins with the psychology of reciprocal altruism, but its duties expanded once humans created gossiping and punitive moral communities. Most people have a deep intuitive concern for the law of karma—they want to see cheaters punished and good citizens rewarded in proportion to their deeds.

 

Exhibit A: Major transitions produce superorganisms. The history of life on Earth shows repeated examples of “major transitions.” When the free rider problem is muted at one level of the biological hierarchy, larger and more powerful vehicles (superorganisms) arise at the next level up in the hierarchy, with new properties such as a division of labor, cooperation, and altruism within the group.

Exhibit B: Shared intentionality generates moral matrices. The Rubicon crossing that let our ancestors function so well in their groups was the emergence of the uniquely human ability to share intentions and other mental representations. This ability enabled early humans to collaborate, divide labor, and develop shared norms for judging each other’s behavior. These shared norms were the beginning of the moral matrices that govern our social lives today.

Exhibit C: Genes and cultures coevolve. Once our ancestors crossed the Rubicon and began to share intentions, our evolution became a two-stranded affair. People created new customs, norms, and institutions that altered the degree to which many groupish traits were adaptive. In particular, gene-culture coevolution gave us a set of tribal instincts: we love to mark group membership, and then we cooperate preferentially with members of our group.

Exhibit D: Evolution can be fast. Human evolution did not stop or slow down 50,000 years ago. It sped up. Gene-culture coevolution reached a fever pitch during the last 12,000 years. We can’t just examine modern-day hunter-gatherers and assume that they represent universal human nature as it was locked into place 50,000 years ago. Periods of massive environmental change (as occurred between 70,000 and 140,000 years ago) and cultural change (as occurred during the Holocene era) should figure more prominently in our attempts to understand who we are, and how we got our righteous minds.

 

Oxytocin simply makes people love their in-group more. It makes them parochial altruists.

 

The second candidate for sustaining within-group coordination is the mirror neuron system.

 

We are more likely to mirror and then empathize with others when they have conformed to our moral matrix than when they have violated it.

 

[Our movement rejects the view of man] as an individual, standing by himself, self-centered, subject to natural law, which instinctively urges him toward a life of selfish momentary pleasure; it sees not only the individual but the nation and the country; individuals and generations bound together by a moral law, with common traditions and a mission which, suppressing the instinct for life closed in a brief circle of pleasure, builds up a higher life, founded on duty, a life free from the limitations of time and space, in which the individual, by self-sacrifice, the renunciation of self-interest … can achieve that purely spiritual existence in which his value as a man consists. Inspiring stuff, until you learn that it’s from The Doctrine of Fascism, by Benito Mussolini. 

 

Believing, doing, and belonging are three complementary yet distinct aspects of religiosity, according to many scholars.12 When you look at all three aspects at the same time, you get a view of the psychology of religion that’s very different from the view of the New Atheists.

 

Often our beliefs are post hoc constructions designed to justify what we’ve just done, or to support the groups we belong to.

 

Sacredness binds people together, and then blinds them to the arbitrariness of the practice.

 

 

Many scholars have talked about this interaction of God, trust, and trade. In the ancient world, temples often served an important commercial function: oaths were sworn and contracts signed before the deity, with explicit threats of supernatural punishment for abrogation.55 In the medieval world, Jews and Muslims excelled in long-distance trade in part because their religions helped them create trustworthy relationships and enforceable contracts.

 

Putnam and Campbell reject the New Atheist emphasis on belief and reach a conclusion straight out of Durkheim: “It is religious belongingness that matters for neighborliness, not religious believing.”

 

Religion is therefore well suited to be the handmaiden of groupishness, tribalism, and nationalism.

 

Anything that binds people together into a moral matrix that glorifies the in-group while at the same time demonizing another group can lead to moralistic killing, and many religions are well suited for that task. Religion is therefore often an accessory to atrocity, rather than the driving force of the atrocity.

 

We are Homo duplex; we are 90 percent chimp and 10 percent bee. Successful religions work on both levels of our nature to suppress selfishness, or at least to channel it in ways that often pay dividends for the group.

 

Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible.

 

My definition of morality was designed to be a descriptive definition; it cannot stand alone as a normative definition.

 

I don’t know what the best normative ethical theory is for individuals in their private lives.68 But when we talk about making laws and implementing public policies in Western democracies that contain some degree of ethnic and moral diversity, then I think there is no compelling alternative to utilitarianism.69 I think Jeremy Bentham was right that laws and public policies should aim, as a first approximation, to produce the greatest total good.70 I just want Bentham to read Durkheim and recognize that we are Homo duplex before he tells any of us, or our legislators, how to go about maximizing that total good.

 

If you think about religion as a set of beliefs about supernatural agents, you’re bound to misunderstand it. You’ll see those beliefs as foolish delusions, perhaps even as parasites that exploit our brains for their own benefit. But if you take a Durkheimian approach to religion (focusing on belonging) and a Darwinian approach to morality (involving multilevel selection), you get a very different picture. You see that religious practices have been binding our ancestors into groups for tens of thousands of years.

 

Our ability to believe in supernatural agents may well have begun as an accidental by-product of a hypersensitive agency detection device, but once early humans began believing in such agents, the groups that used them to construct moral communities were the ones that lasted and prospered.

 

We humans have an extraordinary ability to care about things beyond ourselves, to circle around those things with other people, and in the process to bind ourselves into teams that can pursue larger projects.

 

Morality binds and blinds, and to understand the mess we’re in, we’ve got to understand why some people bind themselves to the liberal team, some to the conservative team, some to other teams or to no team at all.

 

Whether you end up on the right or the left of the political spectrum turns out to be just as heritable as most other traits: genetics explains between a third and a half of the variability among people on their political attitudes.14 Being raised in a liberal or conservative household accounts for much less.

 

Innate does not mean unmalleable; it means organized in advance of experience. The genes guide the construction of the brain in the uterus, but that’s only the first draft, so to speak. The draft gets revised by childhood experiences.

 

After analyzing the DNA of 13,000 Australians, scientists recently found several genes that differed between liberals and conservatives.15 Most of them related to neurotransmitter functioning, particularly glutamate and serotonin, both of which are involved in the brain’s response to threat and fear. This finding fits well with many studies showing that conservatives react more strongly than liberals to signs of danger, including the threat of germs and contamination, and even low-level threats such as sudden blasts of white noise.16 Other studies have implicated genes related to receptors for the neurotransmitter dopamine, which has long been tied to sensation-seeking and openness to experience, which are among the best-established correlates of liberalism.17 As the Renaissance writer Michel de Montaigne said: “The only things I find rewarding … are variety and the enjoyment of diversity.”

 

A major review paper by political psychologist John Jost found a few other traits, but nearly all of them are conceptually related to threat sensitivity (e.g., conservatives react more strongly to reminders of death) or openness to experience (e.g., liberals have less need for order, structure, and closure).

 

These are traits such as threat sensitivity, novelty seeking, extraversion, and conscientiousness. These traits are not mental modules that some people have and others lack; they’re more like adjustments to dials on brain systems that everyone has.

 

Morality binds and blinds. It binds us into ideological teams that fight each other as though the fate of the world depended on our side winning each battle. It blinds us to the fact that each team is composed of good people who have something important to say.

 

In Part I, I presented the first principle of moral psychology: Intuitions come first, strategic reasoning second. I explained how I came to develop the social intuitionist model, and I used the model to challenge the “rationalist delusion.” The heroes of this part were David Hume (for helping us escape from rationalism and into intuitionism) and Glaucon (for showing us the overriding importance of reputation and other external constraints for creating moral order). If you bring one thing home from this part of the trip, may I suggest that it be the image of yourself—and everyone else around you—as being a small rider on a very large elephant. Thinking in this way can make you more patient with other people. When you catch yourself making up ridiculous post hoc arguments, you might be slower to dismiss other people just because you can so easily refute their arguments. The action in moral psychology is not really in the pronouncements of the rider.

 

The second part of our tour explored the second principle of moral psychology: There’s more to morality than harm and fairness. I recounted my time in India, and how it helped me to step out of my moral matrix and perceive additional moral concerns. I offered the metaphor that the righteous mind is like a tongue with six taste receptors. I presented Moral Foundations Theory and the research that my colleagues and I have conducted at YourMorals.org on the psychology of liberals and conservatives. The heroes of this part were Richard Shweder (for broadening our understanding of the moral domain) and Emile Durkheim (for showing us why many people, particularly social conservatives, value the binding foundations of loyalty, authority, and sanctity). If you take home one souvenir from this part of the tour, may I suggest that it be a suspicion of moral monists. Beware of anyone who insists that there is one true morality for all people, times, and places—particularly if that morality is founded upon a single moral foundation. Human societies are complex; their needs and challenges are variable. Our minds contain a toolbox of psychological systems, including the six moral foundations, which can be used to meet those challenges and construct effective moral communities. You don’t need to use all six, and there may be certain organizations or subcultures that can thrive with just one. But anyone who tells you that all societies, in all eras, should be using one particular moral matrix, resting on one particular configuration of moral foundations, is a fundamentalist of one sort or another.

 

The philosopher Isaiah Berlin wrestled throughout his career with the problem of the world’s moral diversity and what to make of it. He firmly rejected moral relativism:

I am not a relativist; I do not say “I like my coffee with milk and you like it without; I am in favor of kindness and you prefer concentration camps”—each of us with his own values, which cannot be overcome or integrated. This I believe to be false.1

He endorsed pluralism instead, and justified it in this way:

I came to the conclusion that there is a plurality of ideals, as there is a plurality of cultures and of temperaments.… There is not an infinity of [values]: the number of human values, of values which I can pursue while maintaining my human semblance, my human character, is finite—let us say 74, or perhaps 122, or 27, but finite, whatever it may be. And the difference this makes is that if a man pursues one of these values, I, who do not, am able to understand why he pursues it or what it would be like, in his circumstances, for me to be induced to pursue it. Hence the possibility of human understanding.2

In the third part of our tour I presented the principle that morality binds and blinds. We are products of multilevel selection, which turned us into Homo duplex. We are selfish and we are groupish. We are 90 percent chimp and 10 percent bee. I suggested that religion played a crucial role in our evolutionary history—our religious minds coevolved with our religious practices to create ever-larger moral communities, particularly after the advent of agriculture. I described how political teams form, and why some people gravitate to the left, others to the right. The heroes of this part were Charles Darwin (for his theory of evolution, including multilevel selection) and Emile Durkheim (for showing us that we are Homo duplex, with part of our nature forged, perhaps, by group-level selection).
If you bring one thing home from this last part of the trip, may I suggest that it be the image of a small bump on the back of our heads—the hive switch, just under the skin, waiting to be turned on. We’ve been told for fifty years now that human beings are fundamentally self...

 

This book explained why people are divided by politics and religion. The answer is not, as Manichaeans would have it, because some people are good and others are evil. Instead, the explanation is that our minds were designed for groupish righteousness. We are deeply intuitive creatures whose gut feelings drive our strategic reasoning. This makes it difficult—but not impossible—to connect with those who liv...


 

So the next time you find yourself seated beside someone from another matrix, give it a try. Don’t just jump right in. Don’t bring up morality until you’ve found a few points of commonality or in some other way established a bit of trust. And when you do bring up issues of morality, try to start with some praise, or with a sincere expression of interest. We’re all stuck here for a while, so let’s try to work it out.

 

Next, I thank my gang, the team at YourMorals.org.

 

Children generally like equality, until they near puberty, but as their social intelligence matures they stop being rigid egalitarians and start becoming proportionalists;

 

Our goal with Moral Foundations Theory and YourMorals.org has been to find the best bridges between anthropology and evolutionary psychology, not the complete set of bridges. We think the six we have identified are the most important ones, and we find that we can explain most moral and political controversies using these six. But there are surely additional innate modules that give rise to additional moral intuitions. Other candidates we are investigating include intuitions about honesty, ownership, self-control, and waste. See MoralFoundations.org to learn about our research on additional moral foundations.

 

Berlin 1997/1958 referred to this kind of liberty as “negative liberty”—the right to be left alone. He pointed out that the left had developed a new concept of “positive liberty” during the twentieth century—a conception of the rights and resources that people needed in order to enjoy liberty.

 

As Dawkins 1976 so memorably put it. Genes can only code for traits that end up making more copies of those genes. Dawkins did not mean that selfish genes make thoroughly selfish people.

 

Of course we are groupish in the minimal sense that we like groups, we are drawn to groups. Every animal that lives in herds, flocks, or schools is groupish in that sense. I mean to say far more than this. We care about our groups and want to promote our group’s interests, even at some cost to ourselves. This is not usually true about animals that live in herds and flocks.

 

We’re not literally a majority of the world’s mammalian weight, but that’s only because we raise so many cows, pigs, sheep, and dogs. If you include us together with our domesticated servants, our civilizations now account for an astonishing 98 percent of all mammalian life, by weight, according to a statement by Donald Johanson, made at a conference on “Origins” at Arizona State University in April 2009. 
