Thinking, Fast and Slow by Daniel Kahneman: a Mostly-Complete Summary
Consider the following question:
Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail. Is Steve more likely to be a librarian or a farmer?
Steve is a librarian, clearly: his description matches a stereotypical librarian, and upon hearing it we picture a quiet man with round glasses checking out books for a living.
But statistically speaking, Steve is probably not a librarian. Male librarians are exceedingly rare in the United States, while male farmers are not: there are roughly 20 times as many male farmers as male librarians, so the conclusion that Steve is a librarian is tenuous at best.
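To see how strongly the base rate weighs against the stereotype, here is a minimal sketch in Python. The 20:1 ratio comes from the passage above; the two likelihoods (how typical the description is of each profession) are invented purely for illustration.

```python
# Base rate from the passage: about 20 male farmers for every male librarian.
farmers_per_librarian = 20

# Assumed likelihoods (purely illustrative): how typical Steve's description
# is of a randomly chosen member of each profession.
p_description_given_librarian = 0.40
p_description_given_farmer = 0.05

# Unnormalized posterior weights via Bayes' rule (the shared denominator cancels).
weight_librarian = 1 * p_description_given_librarian
weight_farmer = farmers_per_librarian * p_description_given_farmer

p_librarian = weight_librarian / (weight_librarian + weight_farmer)
print(f"P(librarian | description) = {p_librarian:.2f}")  # ~0.29: farmer is still the better guess
```

Even with the description assumed to be eight times more typical of librarians than of farmers, the lopsided base rate still makes “farmer” the better guess.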
If you are one of the majority who guessed that Steve was a librarian, then you likely fell victim to the “resemblance heuristic”: instead of asking what Steve was more likely to be, you asked which stereotype Steve resembled most.
The resemblance heuristic is one of several biases that Daniel Kahneman, winner of the Nobel Prize in Economics, outlines in his book Thinking, Fast and Slow. In the book, Kahneman recounts decades of research by himself and his late collaborator Amos Tversky into human statistical intuition and the biases that undermine it. Human intuition relies on many heuristics, which are generally fast, useful, and accurate enough, but which are also susceptible to systematic errors.
Systems 1 and 2
The book’s framework relies on two “characters of the story”: System 1 and System 2. Kahneman writes that the brain primarily relies on these two systems for thinking: System 1 operates “automatically and quickly, with little or no effort and no sense of voluntary control,” and System 2 “allocates attention to the effortful mental activities that demand it, including complex computations.” Because System 2 is effortful, we rely primarily on System 1 in our daily lives, and System 2 usually accepts whatever System 1 suggests. System 2, then, can fairly be described as lazy.
The laziness of System 2 underlies an important distinction between “rational” and “intelligent” people. On one definition, rational people have a more active (less lazy) System 2, and thus rely on it more heavily for thinking. For instance, consider the question:
How many murders occur in the state of Michigan in one year?
System 1, with its quick recall and strong intuitions, may note that Michigan is a somewhat cold state with both urban and rural areas, leading many people to estimate a low-to-medium number of murders. But someone with an active System 2 may pause and think about the composition of Michigan’s cities: Detroit, for instance, has a very high crime rate.
People with a more active System 2 are thus described as more “rational,” but on a more tangible level they also tend to be more focused and to have more self-control. The famous marshmallow experiment on delayed gratification in children illustrates this kind of thinking.
In contrast to System 2’s laziness, System 1 is incredibly active, generating associative thoughts and feelings from the slightest triggers, an effect Kahneman calls “priming.” For instance, the word Florida immediately invokes thoughts of old age, of vacationing, and of confusing behavior. This has implications for human rationality: people who vote at polling places located in schools are more likely to support pro-education measures, and people shown money or salary-related images tend to sit farther from their peers. These priming effects suggest that small triggers can strongly influence people’s decisions.
A related phenomenon is cognitive ease. When things are going well, there’s no need to think about change or put in extra effort, inducing cognitive ease. More specifically, a repeated experience, a clearly-displayed idea, a primed idea, or a good mood can all drive cognitive ease, which leads to feelings of familiarity, truth, and effortlessness. For example, writing in bold, high-contrast text in a legible font with repetition can induce cognitive ease, making your writing more persuasive.
An interesting byproduct is that cognitively straining activities, such as reading text in a hard-to-read font, can engage System 2, which leads to more rational thinking. For instance, Princeton students were less likely to make careless mistakes on trick math questions when the questions were printed in a difficult-to-read font.
Cognitive ease thus has strong implications for truth, persuasion, and decision-making. People find repetition and clarity persuasive. They are more likely to believe repeated falsehoods. They invest in stocks with easy-to-pronounce tickers. Conversely, unfamiliar ideas produce cognitive strain and invite more critical thinking.
Norms and Causality
Just as System 1 is effective at creating associations, it is also quick to judge normality and causality.
System 1 judges normality using two common heuristics: past experience and contextual familiarity. For instance, if you run into an acquaintance while on vacation you will be surprised, but you will be less surprised if you run into them again on another vacation, even though meeting the same person on two separate vacations is even less likely. Likewise, the question “How many animals of each kind did Moses take into the ark?” seems perfectly fine to System 1, but on closer examination we realize that Moses did not take animals into the ark; Noah did. These two examples suggest that System 1 is remarkably quick at detecting what is normal, but its heuristics are often inaccurate.
Similarly, System 1 is quick to jump to causal conclusions. When reading the sentence “After Jane spent the day strolling through New York, she discovered her wallet was missing,” our System 1 immediately supplies plausible causes: a pickpocket, or the wallet slipping out of Jane’s pocket. These causes may well be accurate, but problems arise when System 1 is overly eager to ascribe causes, as when people are quick to blame their failures on others or to explain why particular fluctuations occurred in the stock market.
Confirmation biases
According to psychologist Daniel Gilbert, the only way to understand a statement is to begin with an attempt to believe it. Only then can you decide whether or not to unbelieve it. System 1 is usually responsible for believing: does a statement seem credible? Is it familiar? System 2 is responsible for unbelieving. As you might guess, this creates a dilemma where our system of “believing” is fast and intuitive, whereas our “unbelieving” system is slow and lazy.
This biases us toward confirmation. For instance, asking “Is Sam friendly?” produces a far nicer image of Sam than asking “Is Sam unfriendly?”, even though the phrasing of the question should not bias our assessment of Sam’s friendliness.
A similar bias is the halo effect, where initial and partial impressions of someone can bias us to make judgements about their entire character. This is why liking a president’s politics also makes someone more likely to like their voice and appearance.
However, according to Kahneman, the most significant version of confirmation bias is what he calls “What You See Is All There Is” (WYSIATI). System 1 tries to build coherent, causal stories, whether through inferred causation, priming, or the halo effect, but it relies only on the information it can reach. Because System 1 is so good at quickly creating intuitive, coherent stories, we discount the possibility that we do not know the whole story. In other words, what we see is all that matters: WYSIATI. This leaves us overconfident about issues we are uninformed on, because we fail to account for the fact that our evidence is incomplete.
Judgement
Historically, System 1 has been shaped by evolution to handle questions fundamental to human survival: How are things going? Is this a threat or an opportunity? Is something out of the ordinary?
This means System 1 is very good at making basic judgements very quickly, such as interpreting language or deciding whether a person’s face is friendly or hostile. But what counts as a more complex question?
First, since System 1 is a categorically oriented system, it is better at thinking in associations and examples than at thinking with numbers. This is why estimating the average height of a group is easy (an average resembles a single prototypical member), while estimating the sum of their heights is much harder.
Second, matters of opinion tend to be quite complex, whether the opinions concern fashion, politics, or sports. System 1 forms opinions by substituting simpler, heuristic questions for the ones actually asked. For instance, “How much would you contribute to save an endangered species?” is a very complex question involving cost-benefit analysis, emotion, and opportunity costs. The heuristic question is closer to “How much emotion do I feel when I think of dying dolphins?”, which is far simpler. System 1 thus simplifies the complex question, produces an answer with a certain intensity, and then scales that intensity back onto the original question.
Kahneman goes on to outline some very well-known heuristics:
The mood heuristic for happiness is when we answer the broad question of whether we are generally happy by consulting simpler questions. For instance, questionnaires that ask people about their dating lives just before asking how happy they are elicit significantly different happiness ratings. Dating life becomes a strong heuristic for happiness once it is brought to System 1’s attention.
The affect heuristic is when our likes and dislikes determine our beliefs about the world. For instance, we may like the idea of Medicare For All before we have looked at the literature on its costs or its effects on innovation. Because we already like the policy, we reason backwards and conclude that those costs pale in comparison to its benefits. Unfortunately, the affect heuristic is all too common in political debate: people tend to establish positions before fully understanding the issues, and then search for evidence that agrees with them.
Heuristics and Biases
Here are some common biases that result from an active System 1 and a lazy System 2:
The Law of Small Numbers: We do a bad job of recognizing how unreliable small samples are. Small samples routinely produce extreme, unrepresentative results, yet we still let them strongly influence our opinions, as the simulation below illustrates. This is one example of how people pay more attention to the content of a message than to its reliability.
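A quick simulation (my own illustration, not from the book) makes the point: flip a fair coin in samples of different sizes and count how often the result looks lopsided. The 70%/30% cutoff is an arbitrary choice.

```python
import random

random.seed(0)

def share_of_lopsided_samples(sample_size, trials=10_000):
    """Fraction of fair-coin samples in which heads makes up >= 70% or <= 30%."""
    lopsided = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        share = heads / sample_size
        if share >= 0.7 or share <= 0.3:
            lopsided += 1
    return lopsided / trials

for n in (10, 100, 1000):
    print(f"n = {n:4d}: {share_of_lopsided_samples(n):.1%} of samples look lopsided")
```

Roughly a third of ten-flip samples look lopsided, while samples of a thousand flips essentially never do; small samples simply cannot be trusted to reflect the underlying rate.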
Cause and Chance: We are prone to ascribing causes to events that are really the product of chance. In other words, even when we see a truly random process, we may still insist it is not random.
Anchoring: Hearing one number influences our estimate of another. For instance, hearing the question “Was Gandhi older or younger than 140 when he died?” skews our later estimate of Gandhi’s age at death, even though the number 140 should have no bearing on it. With no other information to go on, our estimate starts from the number we were just given, which feels cognitively easy, and we adjust away from it insufficiently. The effect is so strong that even completely irrelevant numbers can anchor our estimates. Anchoring also matters in negotiations, where opening offers anchor our expectations of what later offers are reasonable.
Availability: When estimating the size of a category or the frequency of an event, we report an impression of how easily instances come to mind. This is why spouses’ self-assessed shares of household chores generally add up to more than 100%: instead of judging how often they actually contribute, each spouse recalls instances of their own contributions, which come to mind far more easily than their partner’s.
An interesting study shows a byproduct of this effect. Participants asked to recall many instances of their own bravery rated themselves as less brave than those asked to recall only a few, because retrieving the additional examples felt difficult. The felt difficulty of recall, rather than the number of examples recalled, drove their self-assessment.
This has important implications for public policy, because the public generally reacts to availability, not to proportionate risk. For instance, the public may overestimate the threat of terrorism because its imagery is so vivid, while far more common causes of death receive little attention.
Representativeness: People tend to ignore base rates when evaluating probabilities, going instead by representativeness. A good example is the farmer/librarian question from the beginning: people judged whether Steve more closely resembled a farmer or a librarian, rather than considering the base rates.
Another implication of representativeness is how people value goods. For instance, people were willing to pay more for a set of 10 intact pieces of china than for a set containing 15 intact pieces plus 5 broken ones. By sheer count the second set is the better value (it contains more intact pieces), but the broken pieces make the set feel less representative of fine china, leading people to undervalue it.
Regression to the mean: People often overestimate the role of skill or brilliance in success; success is usually also a matter of luck. This means a stellar first performance is likely to be less stellar the second time, as performance regresses toward the mean.
Consider a golfer who does very well on the first day of a tournament. We can infer that skill played a large part in that success, but how much of it was skill? If the correlation between day-one and day-two scores is R = 0.9, then much of the performance reflects skill, but part of it is still luck. We cannot assume the golfer will be as lucky again, so it is only rational to predict a somewhat worse score on the second day.
Taming intuition: To tame our intuitions, we should start our prediction from the base rate and then adjust it according to the strength of the evidence, as sketched below. For instance, to predict the college GPA of a child who was very smart in first grade, we would start from the average college GPA and then move toward our intuitive estimate only in proportion to the correlation between early precocity and college GPA.
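Kahneman’s recipe reduces to a one-line formula: shrink the intuitive prediction toward the baseline in proportion to the correlation. A minimal sketch, with all of the numbers (average GPA, intuitive estimate, correlation) invented purely for illustration:

```python
def tamed_prediction(baseline, intuitive_estimate, correlation):
    """Shrink an intuitive prediction toward the baseline, in proportion to
    how predictive the evidence actually is (the correlation)."""
    return baseline + correlation * (intuitive_estimate - baseline)

# Hypothetical numbers for the precocious first grader:
average_gpa = 3.0    # assumed baseline (average college GPA)
intuitive_gpa = 3.9  # the GPA that "feels" right given how impressive the child is
correlation = 0.3    # assumed correlation between early precocity and college GPA

print(f"{tamed_prediction(average_gpa, intuitive_gpa, correlation):.2f}")  # 3.27

# The same logic covers the golfer: with R = 0.9, predict a day-two score that is
# only 90% as far from the field's average as the day-one score was.
```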
The Illusion of Understanding
System 1 is prone to constructing simple, coherent explanations of the past, which leaves us overconfident when we extrapolate from past events to the future.
For instance, there are many stories of people who “knew the 2008 financial crisis was inevitable,” yet those people were largely silent before the crisis. Some did believe a crisis was coming, but nobody knew it for a fact, and very few were confident enough to bet on it and profit from it.
In general, we construct coherent explanations for past events that were difficult to predict before they occurred. We assume that routine surgeries that went wrong were performed by reckless surgeons, even though they may reflect rare mistakes; we assume that policies that failed were poorly thought out, even if they were made with good intentions and a careful reading of the evidence. In other words, System 1 is vulnerable to hindsight bias: it believes it understood the past all along, when the explanation only emerged after the outcome was known.
Experts are also susceptible to these illusions, and they are often more susceptible because they can construct elaborate explanations for their mistakes. In fact, predicting the future is so difficult that one study found expert predictions to be no more accurate than chance, and sometimes worse.
Kahneman argues that in many situations, simple algorithms and formulas do a better job of evaluating and predicting than expert judgement does. Simply scoring job candidates on a short, fixed set of traits and combining the scores can beat an interviewer’s holistic impression. Experts, of course, are highly resistant to this idea, but there are numerous examples of simple formulas being applied to highly subjective problems and outperforming the experts.
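As a rough illustration of what such a formula might look like (the traits, the 1-5 scale, and the equal weighting here are my own placeholder choices, not a procedure prescribed in the book):

```python
# A minimal sketch of formula-based candidate evaluation (placeholder traits).
TRAITS = ["technical skill", "reliability", "communication", "initiative"]

def score_candidate(ratings):
    """Sum of independent 1-5 ratings, one per trait, with no holistic override."""
    assert set(ratings) == set(TRAITS), "rate every trait, and nothing else"
    return sum(ratings[trait] for trait in TRAITS)

alice = {"technical skill": 4, "reliability": 5, "communication": 3, "initiative": 4}
bob = {"technical skill": 5, "reliability": 2, "communication": 4, "initiative": 3}

for name, ratings in [("Alice", alice), ("Bob", bob)]:
    print(f"{name}: {score_candidate(ratings)}")
```

The point is not the particular traits but the discipline: rate each trait independently, add up the scores, and resist overriding the total with a global impression.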