Rationality: What It Is, Why It Seems Scarce, Why It Matters

by Steven Pinker

After finishing this book in February of 2022, I wrote,


"Pinker makes the complicated simple. With a subject as important as rationality, that takes exquisite skill."


My clippings below collapse a 432-page book into 7 pages, measured in 12-point type in Microsoft Word.

Special note: this book is dense with meat, yet I did not find many quotable paragraphs. However, I think what I have clipped will give you enough to decide whether you might want to buy the entire banquet.

See all my book recommendations.  

Here are the selections I made:


It has become commonplace to conclude that humans are simply irrational—more Homer Simpson than Mr. Spock, more Alfred E. Neuman than John von Neumann. And, the cynics continue, what else would you expect from descendants of hunter-gatherers whose minds were selected to avoid becoming lunch for leopards?


How, then, can we understand this thing called rationality which would appear to be our birthright yet is so frequently and flagrantly flouted?


To understand what rationality is, why it seems scarce, and why it matters, we must begin with the ground truths of rationality itself: the ways an intelligent agent ought to reason, given its goals and the world in which it lives. These “normative” models come from logic, philosophy, mathematics, and artificial intelligence, and they are our best understanding of the “correct” solution to a problem and how to find it. They serve as an aspiration for those who want to be rational, which should mean everyone.


When people’s judgments deviate from a normative model, as they so often do, we have a puzzle to solve. Sometimes the disparity reveals a genuine irrationality: the human brain cannot cope with the complexity of a problem, or it is saddled with a bug that cussedly drives it to the wrong answer time and again. But in many cases there is a method to people’s madness. A problem may have been presented to them in a deceptive format, and when it is translated into a mind-friendlier guise, they solve it. Or the normative model may itself be correct only in a particular environment, and people accurately sense that they are not in that one, so the model doesn’t apply. Or the model may be designed to bring about a certain goal, and, for better or worse, people are after a different one. In the chapters to come, we will see examples of all these extenuating circumstances. The penultimate chapter will lay out how some of today’s florid outbursts of irrationality may be understood as the rational pursuit of goals other than an objective understanding of the world.


Echoing a famous argument by the philosopher Karl Popper, most scientists today insist that the dividing line between science and pseudoscience is whether advocates of a hypothesis deliberately search for evidence that could falsify it and accept the hypothesis only if it survives.


But probabilities are not about the world; they’re about our ignorance of the world.


New information reduces our ignorance and changes the probability. If that sounds mystical or paradoxical, think about the probability that a coin I just flipped landed heads. For you, it’s .5. For me, it’s 1 (I peeked). Same event, different knowledge, different probability.


The classic errors in reasoning are often called “cognitive illusions,” and the parallels with the visual illusions familiar from cereal boxes and science museums are instructive. They run deeper than the obvious fact that our eyes and minds can trick us. They explain how our species can be so smart and yet so easily deluded.


And as excellent as our cognitive systems are, in the modern world we must know when to discount them and turn our reasoning over to instruments—the tools of logic, probability, and critical thinking that extend our powers of reason beyond what nature gave us.


Because in the twenty-first century, when we think by the seat of our pants, every correction can make things worse, and can send our democracy into a graveyard spiral.


Rationality is uncool. To describe someone with a slang word for the cerebral, like nerd, wonk, geek, or brainiac, is to imply they are terminally challenged in hipness. For decades, Hollywood screenplays and rock song lyrics have equated joy and freedom with an escape from reason. “A man needs a little madness or else he never dares cut the rope and be free,” said Zorba the Greek. “Stop making sense,” advised Talking Heads; “Let’s go crazy,” adjured the Artist Formerly Known as Prince.


A definition that is more or less faithful to the way the word is used is “the ability to use knowledge to attain goals.”


The beliefs, moreover, must be held in service of a goal. No one gets rationality credit for merely thinking true thoughts, like calculating the digits of π.


Even the humdrum rationality of seeing rather than hallucinating is in the service of the ever-present goal built into our visual systems of knowing our surroundings.


With this definition the case for rationality seems all too obvious: do you want things or don’t you? If you do, rationality is what allows you to get them.


“Ambition must be made to counteract ambition,” wrote James Madison about the checks and balances in a democratic government, and that is how other institutions steer communities of biased and ambition-addled people toward disinterested truth. Examples include the adversarial system in law, peer review in science, editing and fact-checking in journalism, academic freedom in universities, and freedom of speech in the public sphere. Disagreement is necessary in deliberations among mortals. As the saying goes, the more we disagree, the more chance there is that at least one of us is right.


Rationality rejecters can refuse to play the game. They can say, “I don’t have to justify my beliefs to you. Your demands for arguments and evidence show that you are part of the problem.” Instead of feeling any need to persuade, people who are certain they are correct can impose their beliefs by force. In theocracies and autocracies, authorities censor, imprison, exile, or burn those with the wrong opinions. In democracies the force is less brutish, but people still find means to impose a belief rather than argue for it. Modern universities—oddly enough, given that their mission is to evaluate ideas—have been at the forefront of finding ways to suppress opinions, including disinviting and drowning out speakers, removing controversial teachers from the classroom, revoking offers of jobs and support, expunging contentious articles from archives, and classifying differences of opinion as punishable harassment and discrimination.7 They respond as Ring Lardner recalled his father doing when the writer was a boy: “ ‘Shut up,’ he explained.”


The psychologist Walter Mischel captured the conflict in an agonizing choice he gave four-year-olds in a famous 1972 experiment: one marshmallow now or two marshmallows in fifteen minutes.15 Life is a never-ending gantlet of marshmallow tests, dilemmas that force us to choose between a sooner small reward and a later large reward.


Watch a movie now or pass a course later; buy a bauble now or pay the rent later; enjoy five minutes of fellatio now or an unblemished record in the history books later.


As Homer Simpson said to Marge when she warned him that he would regret his conduct, “That’s a problem for future Homer. Man, I don’t envy that guy.”


When you combine self-interest and sociality with impartiality—the interchangeability of perspectives—you get the core of morality.


Many of our fiercest controversies involve decisions on how to reconcile fuzzy family resemblance concepts with the classical categories demanded by logic and law.


Plane crashes, in contrast, get lavish coverage, but they kill only about 250 people a year worldwide, making planes about a thousand times safer per passenger mile than cars.


Yet nuclear power has stalled for decades in the United States and is being pushed back in Europe, often replaced by dirty and dangerous coal. In large part the opposition is driven by memories of three accidents: Three Mile Island in 1979, which killed no one; Fukushima in 2011, which killed one worker years later (the other deaths were caused by the tsunami and from a panicked evacuation); and the Soviet-bungled Chernobyl in 1986, which killed 31 in the accident and perhaps several thousand from cancer, around the same number killed by coal emissions every day.


Availability, to be sure, is not the only distorter of risk perception. Paul Slovic, a collaborator of Tversky and Kahneman, showed that people also overestimate the danger from threats that are novel (the devil they don’t know instead of the devil they do), out of their control (as if they can drive more safely than a pilot can fly), human-made (so they avoid genetically modified foods but swallow the many toxins that evolved naturally in plants), and inequitable (when they feel they assume a risk for another’s gain).


The worst terrorist attack in history by far was 9/11, and it claimed 3,000 lives; in most bad years, the United States suffers a few dozen terrorist deaths, a rounding error in the tally of homicides and accidents. (The annual toll is lower, for example, than the number of people killed by lightning, bee stings, or drowning in bathtubs.) Yet 9/11 led to the creation of a new federal department, massive surveillance of citizens and hardening of public facilities, and two wars which killed more than twice as many Americans as the number who died in 2001, together with hundreds of thousands of Iraqis and Afghans. 


To take another low-death/high-fear hazard, rampage killings in American schools claim around 35 victims a year, compared with about 16,000 routine police-blotter homicides.25 Yet American schools have implemented billions of dollars of dubious safety measures, like installing bulletproof whiteboards and arming teachers with pepperball guns, while traumatizing children with terrifying active-shooter drills.


These upheavals were driven by the impression that African Americans are at serious risk of being killed by the police. Yet as with terrorism and school shootings, the numbers are surprising. A total of 65 unarmed Americans of all races are killed by the police in an average year, of which 23 are African American, which is around three tenths of one percent of the 7,500 African American homicide victims.


Not just beside the point but taboo. A communal outrage inspires what the psychologist Roy Baumeister calls a victim narrative: a moralized allegory in which a harmful act is sanctified, the damage consecrated as irreparable and unforgivable.29 The goal of the narrative is not accuracy but solidarity. Picking nits about what actually happened is seen as not just irrelevant but sacrilegious or treasonous.30


The press is an availability machine. It serves up anecdotes which feed our impression of what’s common in a way that is guaranteed to mislead. Since news is what happens, not what doesn’t happen, the denominator in the fraction corresponding to the true probability of an event—all the opportunities for the event to occur, including those in which it doesn’t—is invisible, leaving us in the dark about how prevalent something really is.


As the economist Max Roser points out, news sites could have run the headline 137,000 People Escaped Extreme Poverty Yesterday every day for the past twenty-five years.33 But they never ran the headline, because there was never a Thursday in October in which it suddenly happened. So one of the greatest developments in human history—a billion and a quarter people escaping from squalor—has gone unnoticed.


The ignorance is measurable. Pollsters repeatedly find that while people tend to be too optimistic about their own lives, they are too pessimistic about their societies. For instance, in most years between 1992 and 2015, an era that criminologists call the Great American Crime Decline, a majority of Americans believed that crime was rising.


Calamity-peddling journalism also sets up perverse incentives for terrorists and rampage shooters, who can game the system and win instant notoriety.37 And a special place in Journalist Hell is reserved for the scribes who in 2021, during the rollout of Covid vaccines known to have a 95 percent efficacy rate, wrote stories on the vaccinated people who came down with the disease—by definition not news (since it was always certain there would be some) and guaranteed to scare thousands from this lifesaving treatment.


Consumers of news should be aware of its built-in bias and adjust their information diet to include sources that present the bigger statistical picture: less Facebook News Feed, more Our World in Data.38 Journalists should put lurid events in context. A killing or plane crash or shark attack should be accompanied by the annual rate, which takes into account the denominator of the probability, not just the numerator. A setback or spate of misfortunes should be put into the context of the longer-term trend.


Though editors have told me that readers hate math and will never put up with numbers spoiling their stories and pictures, their own media belie this condescension. People avidly consume data in the weather, business, and sports pages, so why not the news?


People are surprised to learn that if 23 people are in a room, the chances that two will share a birthday are better than even. With 57 in the room, the odds rise to 99 percent. Though it’s unlikely that anyone in the room will share my birthday, we’re not looking for matches with me, or with anyone else singled out a priori. We’re counting matches post hoc, and there are 366 ways for a match to occur.
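
The birthday arithmetic above is easy to check. Here is a minimal Python sketch (the function name is my own; `math.prod` assumes Python 3.8+) that computes the chance of at least one shared birthday by taking the complement of the probability that all birthdays are distinct:

```python
from math import prod

def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    # P(all n distinct) = (days/days) * ((days-1)/days) * ... * ((days-n+1)/days)
    p_distinct = prod((days - k) / days for k in range(n))
    return 1 - p_distinct

print(round(p_shared_birthday(23), 3))  # 0.507 — better than even
print(round(p_shared_birthday(57), 3))  # about 0.99
```

The counterintuitive part is the post hoc counting: with 23 people there are 253 possible pairs, so a match somewhere is far more likely than a match with any one person singled out in advance.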


The cluster illusion, like other post hoc fallacies in probability, is the source of many superstitions: that bad things happen in threes, people are born under a bad sign, or an annus horribilis means the world is falling apart. When a series of plagues is visited upon us, it does not mean there is a God who is punishing us for our sins or testing our faith. It means there is not a God who is spacing them apart.
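
The cluster illusion is easy to demonstrate: scatter events at random and the gaps between them vary wildly, producing apparent "runs of bad luck" with no cause at all. A small illustrative Python sketch (the seed and counts are arbitrary choices of mine):

```python
import random

random.seed(1)

# 20 events scattered uniformly across 100 days — nothing is spacing them apart.
events = sorted(random.uniform(0, 100) for _ in range(20))
gaps = [b - a for a, b in zip(events, events[1:])]

# Pure chance routinely yields tight clusters and long droughts:
print(f"shortest gap: {min(gaps):.2f} days, longest gap: {max(gaps):.2f} days")
```

Evenly spaced events would be the real signature of an intervening hand; randomness clumps.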


Extraordinary claims require extraordinary evidence. —Carl Sagan


As Francis Crick liked to say, “Any theory that can account for all the facts is wrong, because some of the facts are wrong.”


A pithier version of the Bayesian argument against paranormal claims was stated by the astronomer and science popularizer Carl Sagan (1934–1996) in the slogan that serves as this chapter’s epigraph: “Extraordinary claims require extraordinary evidence.” An extraordinary claim has a low Bayesian prior. For its posterior credence to be higher than the posterior credence in its opposite, the likelihood of the data given that the hypothesis is true must be far higher than the likelihood of the data given that the hypothesis is false. The evidence, in other words, must be extraordinary.


Given the costs of information, the perfect can be the enemy of the good.


Tversky and Kahneman note that no one would buy probabilistic insurance, with premiums at a fraction of the cost but coverage only on certain days of the week, though they happily incur the same overall risk by insuring themselves against some hazards, like fires, but not others, like hurricanes.27 They buy insurance for peace of mind—to give themselves one less thing to worry about.


This may also explain societal decisions such as banning nuclear power, with its tiny risk of a disaster, rather than reducing the use of coal, with its daily drip of many more deaths.


The American Superfund law calls for eliminating certain pollutants from the environment completely, though removing the last 10 percent may cost more than the first 90 percent.


The US Supreme Court justice Stephen Breyer commented on litigation to force the cleanup of a toxic waste site: “The forty-thousand-page record of this ten-year effort indicated (and all the parties seemed to agree) that, without the extra expenditure, the waste dump was clean enough for children playing on the site to eat small amounts of dirt daily for 70 days each year without significant harm. . . . But there were no dirt-eating children playing in the area, for it was a swamp. . ...


Kahneman and Tversky conclude that people are not risk-averse across the board, though they are loss-averse: they seek risk if it may avoid a loss.29


But the two pairs of options pose the same odds: all that changed was whether they were framed as the number who lived, perceived as a gain, or the number who died, perceived as a loss.


As Tversky once asked me when we were colleagues, “How many things could happen to you today that could make you much better off? How many things could happen to you today that could make you much worse off? The second list is bottomless.”


All those taboos, bounds, intransitivities, flip-flops, regrets, aversions, and framings merely show that people flout the axioms, not that they ought to. To be sure, in some cases, like the sacredness of our relationships and the awesomeness of death, we really may be better off not doing the sums prescribed by the theory. But we do always want to keep our choices consistent with our values. That’s all that the theory of expected utility can deliver, and it’s a consistency we should not take for granted. We call our decisions foolish when they subvert our values and wise when they affirm them.


We have already seen that some breaches of the axioms truly are foolhardy, like avoiding tough societal tradeoffs, chasing zero risk, and being manipulated by a choice of words. I suspect there are countless decisions in life where if we did multiply the risks by the rewards we would choose more wisely.

When you buy a gadget, should you also buy the extended warranty pushed by the salesperson? About a third of Americans do, forking over $40 billion a year. But does it really make sense to take out a health insurance policy on your toaster? The stakes are smaller than insurance on a car or house, where the financial loss would have an impact on your well-being. If consumers thought even crudely about the expected value, they’d notice that an extended warranty can cost almost a quarter of the price of the product, meaning that it would pay off only if the product had more than a 1 in 4 chance of breaking. A glance at ...
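
The break-even reasoning in that passage is simple expected value. A hypothetical Python sketch (the prices and failure rates are made up for illustration):

```python
def warranty_worthwhile(product_price, warranty_price, p_failure):
    """Compare the warranty's expected payout against its cost."""
    # Expected payout = chance of failure * cost of replacing the product
    expected_payout = p_failure * product_price
    return expected_payout > warranty_price

# A $200 gadget with a $50 extended warranty breaks even only if the gadget
# has more than a 1 in 4 chance of failing during the warranty period.
print(warranty_worthwhile(200, 50, 0.05))  # False — typical failure rates are far lower
print(warranty_worthwhile(200, 50, 0.30))  # True
```

Because a broken toaster cannot wreck your finances the way a totaled car or a burned house can, there is no peace-of-mind premium to justify paying far above the expected loss.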


It doesn’t take a lot of math to show that the expected utility of ovarian cancer screening is negative.41 The same is true for men when it comes to screening for prostate cancer with the prostate-specific antigen test (I opt out). These are easy cases; we’ll take a deeper dive into how to compare the costs and benefits of hits and false alarms in the next chapter.


Enhancing sensitivity should always be our aspiration in signal detection challenges, and that brings us to one of its most important applications.


It’s captured in the saying “Don’t throw good money after bad” and in the First Law of Holes: “When you’re in one, stop digging.” One of the most commonly cited human irrationalities is the sunk-cost fallacy, in which people continue to invest in a losing venture because of what they have invested so far rather than in anticipation of what they will gain going forward. Holding on to a tanking stock, sitting through a boring movie, finishing a tedious novel, and staying in a bad marriage are familiar examples.


The common rationale is “We fight so that our boys will not have died in vain,” a textbook example of the sunk-cost fallacy but also a tactic in the pathetic quest for a Pyrrhic victory.


Regression to the mean happens whenever two variables are imperfectly correlated, which means that we have a lifetime of experience with it. Nonetheless, Tversky and Kahneman have shown that most people are oblivious to the phenomenon


People’s attention gets drawn to an event because it is unusual, and they fail to anticipate that anything associated with that event will probably not be quite as unusual as that event was. Instead, they come up with fallacious causal explanations for what in fact is a statistical inevitability. A tragic example is the illusion that criticism works better than praise, and punishment better than reward.11 We criticize students when they perform badly. But whatever bad luck cursed that performance is unlikely to be repeated in the next attempt, so they’re bound to improve, tricking us into thinking that punishment works. We praise them when they do well, but lightning doesn’t strike twice, so they’re unlikely to match that feat the next time, fooling us into thinking that praise is counterproductive.
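
Regression to the mean can be simulated directly. In this illustrative Python sketch (all parameters are my own), each performance is stable skill plus random luck; with no praise or punishment at all, the worst performers improve on their second attempt and the best decline:

```python
import random

random.seed(0)

def performance(skill):
    # Observed score = stable skill + fresh luck on each attempt
    return skill + random.gauss(0, 1)

skills = [random.gauss(0, 1) for _ in range(100_000)]
first = sorted((performance(s), s) for s in skills)

# The worst 10% of first attempts ("criticized") and best 10% ("praised"):
worst, best = first[:10_000], first[-10_000:]

first_worst = sum(p for p, _ in worst) / len(worst)
first_best = sum(p for p, _ in best) / len(best)
second_worst = sum(performance(s) for _, s in worst) / len(worst)
second_best = sum(performance(s) for _, s in best) / len(best)

print(f"worst group: {first_worst:.2f} -> {second_worst:.2f}")  # rises, untouched
print(f"best group:  {first_best:.2f} -> {second_best:.2f}")    # falls, untouched
```

The extreme first scores were partly luck, and luck does not repeat, so the second attempts drift back toward each group's average skill — exactly the pattern that tricks observers into crediting criticism and discounting praise.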


After a spree of horrific crimes is splashed across the papers, politicians intervene with SWAT teams, military equipment, Neighborhood Watch signs, and other gimmicks, and sure enough, the following month they congratulate themselves because the crime rate is not as high. Psychotherapists, too, regardless of their flavor of talking cure, can declare unearned victory after treating a patient who comes in with a bout of severe anxiety or depression.


The Winner’s Curse applies to any unusually successful human venture, and our failure to compensate for singular moments of good fortune may be one of the reasons that life so often brings disappointment.


For many years coffee was blamed for heart disease, because coffee drinkers had more heart attacks. It turned out that coffee drinkers also tend to smoke and avoid exercise; the coffee was an epiphenomenon.


As Upton Sinclair pointed out, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
