Thinking, Fast and Slow - Two Systems
This part of [[Thinking, Fast and Slow, Kahneman]] goes over the model of two systems of thought:
- System 1, the instant, unconscious and intuitive form of thinking.
- System 2, the slower, conscious and more rational form of thinking.
It explores the consequences of this model, why it exists, and how it can be predictably wrong in certain ways.
Notes
- System 1 and System 2
Mental Effort
- Add-1 and Add-3
- The Pupil as a Window to the Soul
- Levels of Effort in Everyday Life
- Electricity Metaphor
- Allocation of Attention as an Evolutionary Advantage
- Skill Reduces Mental Effort
- Money Metaphor
- What Makes Something Difficult?
- How Do We Make Things Less Difficult?
- The Law of Least Effort
- Consequences of Mental Effort
- Ego Depletion and Mental Effort
- Depleting Self-Control
- Consequences of Poor Self-Control
- Experiments Studying Self-Control
- Flow
- Laziness and Intelligence
- Association
- Cognitive Ease and Cognitive Strain
- Norms, Surprises and Causes
Jumping to Conclusions
- Why Jump to Conclusions
- Decisions, Big and Small
- No Alternatives When Jumping to Conclusions
- Confirmation Bias
- Confirmation Bias as Ambiguity
- Halo Effect
- Example: Confirmation Bias and Assessment Marking
- Decorrelating Error
- Examples of Decorrelating Error
- What You See Is All There Is (WYSIATI)
- Misappropriate Models
- Consequences of WYSIATI
- How Judgements Happen
- Answering an Easier Question
- Summary of System 1
- Flashcards
Notes
System 1 and System 2
This is one of the most central ideas in the book. There are two “systems” of thought:
System 1, which operates quickly and automatically. It requires little or no effort and there’s no sense of voluntary control.
- Detect one object is further away than another.
- Orient to the source of a sudden sound.
- Complete the phrase “Bread and…”.
- Make a “disgust face” when shown a horrible picture.
- Detect hostility in a voice.
- Answer $2 + 2 = ?$.
- Read words on large billboards.
- Drive a car on an empty road.
- Find a strong move in chess (if you are a chess master).
- Understand simple sentences.
- Recognise that a “meek and tidy soul with a passion for detail” resembles an occupational stereotype.
- Make movements to remain balanced while riding a bike (if you know how to ride a bike).
- Move food around effectively with a knife and a fork.
- Recognise emotions from facial expressions.
- Type a word on a keyboard.
- Scratch an itch.
- Blinking? (maybe, I’m not so sure about this one)
System 2, which operates when required for demanding mental activities. This is the part that “you” feel like; the sense of agency, choice and concentration.
- Brace for the starter gun in a race.
- Focus attention on the clowns in the circus.
- Focus on the voice of a particular person in a crowded and noisy room.
- Look for a woman with white hair.
- Search memory to identify a surprising sound.
- Maintain a faster walking speed than is natural for you.
- Monitor the appropriateness of your behavior in a social situation.
- Count the occurrences of the letter a in a page of text.
- Tell someone your phone number.
- Park in a narrow space (for most people except garage attendants).
- Compare two washing machines for overall value.
- Fill out a tax form.
- Check the validity of a complex logical argument.
- Consider the steps to solve a complex maths problem.
- Find a strong move in chess (unless you’re a grand master).
- Make movements to remain balanced while riding a bike (if you're learning how to ride a bike).
System 1
Mental activities that seem difficult can become fast and automatic through prolonged practice – this is part of what makes someone an expert. It's easy to see how everyone has some sort of expert intuition as described in [[Thinking, Fast and Slow - Introduction]] here. Through prolonged practice, the once deliberate forms of thinking, like understanding the words on a page or performing mental arithmetic, have become speedy and unconscious.
System 1's capacity for learning stretches over many different disciplines, as shown by the examples. It's learnt to read, it's learnt the tiny muscle contractions that keep a bike balanced, it's learnt to understand the nuances of social situations. The fact it can recognise occupational stereotypes also demonstrates that it's learnt about culture. This vast network of memory is accessed without intention or effort.
System 1 is described as not having a sense of voluntary control. It's impossible not to do $2 + 2$ or to read a word on a screen. I remember reading a YouTube comment once that said this has something to do with meditation: there would be a sense of bliss if you could temporarily fail to recognise all language and live in a voiceless world.
System 2
The distinguishing feature of System 2 is that its mental activities require attention. System 2 type tasks become increasingly difficult the more attention is drawn away from the task at hand. This is why it’s difficult to do something like $17\times28$ while also riding a bike.
System 2 can override the operations of System 1 by reprogramming the normally automatic functions of perception and memory. This too requires attention and is the same reason constantly being on the lookout would be exhausting. It’s also the same reason it’s hard to write in here when people are running around upstairs, I’m trying to write this but my attention is being drawn away.
The phrase “pay attention” reflects this: there’s a limited budget of attention. Stretching the budget too thin means that it’s impossible to do anything.
The intense engagement of System 2 can make you effectively cognitively blind: all your attention is focused on the task at hand and irrelevant stimuli are blocked out. This is the reason for the surprising "invisible gorilla" basketball test.
Normal Function
This is how System 1 and System 2 interact most of the time.
- Both are active whenever we are awake.
- System 1 runs automatically.
- System 2 runs in a comfortable low-effort mode.
- System 1 continuously generates suggestions for System 2:
- Impressions
- Intuitions
- Intentions
- Feelings
- If System 2 agrees with these:
- Impressions -> Beliefs
- Intuitions -> Beliefs
- Intentions -> Actions
- Feelings -> Beliefs
This happens most of the time. You generally believe your impressions and act on your desires, the alternative would be exhausting.
- When System 1 runs into a problem it cannot quickly solve, it directs the attention of System 2 to the problem.
This is the surge of conscious attention that you can feel. Notice the difference between thinking about $17\times28$ and then actually trying it.
- System 2 is also called on when an event is detected that violates the model of the world that System 1 maintains:
- Cows don’t bark.
- Lamps don’t jump.
- People don’t normally get beaten up at the side of the street.
- You don’t normally see your maths teacher on a run.
With its superior computing power, System 2 handles these situations much better.
- System 2 also constantly monitors your own behavior:
- Staying polite when angry.
- Maintaining alertness when driving at night.
- Considering what you are about to say.
In summary: most of what you think and do happens unconsciously in System 1, though when things get difficult or a model is violated, System 2 is utilised. System 1 makes a lot of decisions, but in complicated situations, System 2 normally has the last word.
Positives
This makes normal daily life highly efficient. It minimises effort but maximises performance. Most of the time, System 1 is correct: its models of familiar situations are accurate and it can make good short-term predictions. Why make things complicated when they don't have to be?
Cons
System 1 is wrong in certain situations. This is a problem because automatic, incorrect decisions get made. Most of the time, these problems aren't life or death (Question: if they were, would evolution optimise them away? Perhaps it already did.) and so the good-enough approach is fine.
However, there are places where System 1 is predictably wrong – systematic errors that it is prone to make in specified circumstances. This is what heuristics and biases are.
Conflict
Sometimes System 1 and System 2 are in conflict. One example is the classic Stroop test: saying the colour a word is printed in when the word itself names a different colour, such as the word "Blue" in a red font. These are difficult because there is a conflict between the automatic System 1 response (just reading the word) and the more unusual process of naming the colour of the word.
Since System 2 mostly has the last say, one of the tasks of System 2 is to overcome the impulses of System 1. System 2 is in charge of self control.
Useful Fictions Or Fake Frameworks
See also:
System 1 and System 2 aren't real. There aren't two characters sitting in your head performing all these separate tasks. It's a useful fiction: an abstraction over a complicated topic that allows you to conceptualise it more easily. It's the same reason the Bohr model of the atom is still useful despite being wrong.
The analogy of System 1 and System 2 is an especially good one because humans are intuitively good (System 1) at understanding stories and concepts involving other people. By describing these “systems” like people with their own personality, we are more likely to remember them.
Mental Effort
Add-1 and Add-3
Set a 60 BPM metronome. Read a 4-digit number out loud, wait 2 beats and then repeat out loud each digit incremented by 1 (or 3). E.g. 5294 -> 6305 (Add-1) or 4965 -> 7298 (Add-3).
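The digit-wise transform is trivial for a computer; a minimal sketch (the function name is mine, and digits wrap past 9 back to 0):

```python
def add_n(number: str, n: int) -> str:
    """Increment each digit of a number string by n, wrapping modulo 10."""
    return "".join(str((int(d) + n) % 10) for d in number)

print(add_n("5294", 1))  # -> 6305
print(add_n("4965", 3))  # -> 7298
```

The difficulty of the task is doing this in working memory under time pressure, not the arithmetic itself.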
The Pupil as a Window to the Soul
The pupil dilates in response to many different stimuli, such as mental effort and emotional arousal.
- Example: Of two pictures of the same person with different pupil sizes, the one with larger pupils is judged more attractive.
- Example: Shoppers sometimes wear sunglasses in order to hide their level of interest from merchants.
- Example: A drug that makes you look more attractive works by expanding your pupils.
Pupil size is a very good gauge of mental effort: the extent to which the pupil expands indicates how demanding the task is.
The response to most demanding tasks is like an upright triangle. In the context of Add-1 and Add-3:
- Starts off with not much mental effort, small pupils
- Builds up with each different digit, pupils get larger
- Hits a near intolerable peak, pupils are at their biggest
- Goes down as digits are offloaded from short-term memory, pupils decrease in size
If a task gets too difficult and someone gives up, pupils revert back to their normal size.
Levels of Effort in Everyday Life
Most tasks are very undemanding or have become undemanding through practice. The formal steps of making small talk are quite involved, but for most people it feels very natural. You could think of these as “mental strolls”.
Some tasks are very demanding. Add-1 and Add-3 are examples. You could think of these as "mental sprints".
Electricity Metaphor
Mental effort is electricity.
- Turning on different appliances in your home draws different amounts of electricity.
Performing multiple different tasks draws different amounts of mental effort.
- Different appliances can only draw a certain amount of electricity.
- Different mental tasks can only draw a certain amount of mental effort.
Example: If you’re asked to remember 9384 for the next 10 seconds because your life depends on it, you still won’t draw more mental effort than you do when engaging in a mental sprint like Add-1 or Add-3. It stays fixed.
- If you use too much power in a home, the breaker flips and all devices turn off immediately.
- This is different to System 2's response, which will selectively drop tasks and focus on the most important activity.
Allocation of Attention as an Evolutionary Advantage
It makes sense that attention is allocated in this way. Responding to the biggest threats first and pushing out irrelevant stimuli improves chances of survival, and so those with this adaptation can survive for longer. (Does this link to [[LSD Microdosing RCT, Gwern]]? The Doors of Perception?)
In an emergency, System 1 will take over and take corrective action before the event reaches consciousness or System 2. This is why a firefighter may have an intuitive sense that something is about to go wrong or why you respond to the threat of an accident in a car before you even realise.
Skill Reduces Mental Effort
As you become more skilled at something, the amount of mental effort you need to exert in order to do the task decreases.
Highly intelligent people need less effort to solve complicated problems (Question: could this be explained by a larger working memory that allows them to “chunk” things more effectively?).
Money Metaphor
Mental effort is money. You want to spend less money doing things, so the Law of Least Effort gives you an advantage.
Becoming skilled at a certain task is the result of deciding that you can reduce the overall effort later ("the acquisition of skill is driven by the balance of benefits and costs").
What Makes Something Difficult?
- Holding several things in memory at once:
- Ideas
- Actions
- Holding big things in memory
- Time pressure
There are some things that require mental effort to sustain them as well:
- Resisting temptation
- Keeping up a pace
You can tell these are difficult by thinking about how hard they would be to multitask. It would be near impossible to do Add-3 on a 5-digit number whilst walking faster than comfortable and at the same time resisting some chocolate that's being dangled in front of your face.
Add-1 and Add-3 are difficult because they're a combination of all of these things.
How Do We Make Things Less Difficult?
- Holding several things in memory:
- Storing intermediate results, e.g. on paper or in long-term memory
- Holding big things in memory:
- Dividing complicated things into lots of little things. Abstraction?
- Time pressure:
- Uhhh not trying to go as fast as possible normally.
The Law of Least Effort
If there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action.
The Law of Least Effort is a consequence of the general avoidance of mental effort. Frequent switching of tasks and quick mental work isn’t intrinsically enjoyable and so people avoid it where possible.
Consequences of Mental Effort
As was mentioned earlier, engaging System 2 to its full extent can make you effectively cognitively blind. This happens for most types of “mental sprints”, as irrelevant information (at least in the context of the task) is blocked out.
Lots of mental effort also worsens self control. Some examples:
- May use sexist language
- Make incorrect judgements
- Choose a chocolate cake over a fruit salad
Ego Depletion and Mental Effort
Ego Depletion is the idea that mental energy comes from the available glucose in the brain. This links back to the idea of money as a metaphor for mental effort – you can spend a certain amount of energy doing a task.
As a secondary effect, ego depletion could also affect your physical stamina, since sustaining discomfort requires discipline and self-control that deteriorates as mental energy is used.
Depleting Self-Control
The following deplete self-control:
- Avoiding the thought of white bears
- Inhibiting the emotional response to a stirring film
- Making a series of choices that involve conflict
- Trying to impress others
- Responding kindly to a partner’s bad behavior
- Interacting with a person of a different race (for prejudiced individuals)
These worsen self-control because they are effortful; they involve conflict in overcoming a natural tendency:
- Avoiding the thought of white bears: Thinking of white bears
- Inhibiting the emotional response to a stirring film: Displaying emotion
- Trying to impress others: modifying your normal behavior (?)
- Responding kindly to a partner's bad behavior: Being aggressive back to them
- Interacting with a person of a different race: Being aggressive
The expenditure of mental energy isn’t the only thing that can affect self control though. Consider:
- Sleeping poorly
- Drinking alcohol
- Being in pain
- Being hungry
Consequences of Poor Self-Control
- Deviating from one’s diet
- Overspending on impulsive purchases
- Reacting aggressively to provocation
- Performing poorly in cognitive tasks and logical decision making
Experiments Studying Self-Control
- Stifling emotional reaction to a film meant participants couldn’t hold onto a bar for as long
- Resisting tempting foods meant participants gave up sooner on a difficult cognitive task
- Causing ego depletion and then giving participants glucose restored their self-control
- Parole judges were more likely to deny parole just before eating lunch
Flow
[[Flow, Csikzentmilhalyi]] seems to go against some of these principles.
A state of effortless concentration so deep that you lose sense of time, of yourself and of your problems.
If people naturally want to avoid effort, then why is flow such an "optimal experience"? The book doesn't go into a huge amount of detail on this, but if I had to guess, it would be because flow sits on the thin line between a task being too difficult (and therefore very mentally effortful) and a task being too easy (and therefore boring).
It would be difficult to achieve a sense of flow with something like Add-3. It’s orders of magnitude more difficult than everyday existence and there’s also no clear sense of progression.
Laziness and Intelligence
Laziness
This isn't talking about laziness as in lounging around all day and not doing much (though it's related – a depleted System 2 won't have the self-control to work towards something useful). This section talks about mental laziness, the characteristic of being overconfident and placing too much faith in one's intuitions.
Mental laziness could be thought of as being uncritical of suggestions given by System 1.
Example: Bat and Ball
A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?
- Automatic answer: $0.10, which is wrong. This would make the total $1.20.
- Correct answer: $0.05
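Checking either answer takes seconds; a quick sketch of the check (function name is mine):

```python
def total_cost(ball: float) -> float:
    """The bat costs exactly one dollar more than the ball."""
    bat = ball + 1.00
    return bat + ball

print(round(total_cost(0.10), 2))  # -> 1.2, the intuitive answer overshoots
print(round(total_cost(0.05), 2))  # -> 1.1, correct
```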
Example: Rose Logic
All roses are flowers. Some flowers fade quickly. Therefore some roses fade quickly. Is this a valid argument?
- Automatic answer: Yes.
- Correct answer: No – roses might not be included in the group of flowers that fade quickly.
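A concrete counterexample shows the invalidity (the sets here are made up purely for illustration):

```python
roses = {"red rose", "white rose"}
other_flowers = {"daffodil", "tulip"}
flowers = roses | other_flowers   # premise 1: all roses are flowers
fade_quickly = {"daffodil"}       # premise 2: some flowers fade quickly

assert fade_quickly <= flowers    # both premises hold...
print(roses & fade_quickly)       # -> set(): ...yet no rose fades quickly
```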
Checking Answers
In the two examples, System 1 gives an answer. Since System 1 suggests things to System 2, System 2 must have endorsed the answer.
Both examples are easy to validate: a few seconds of mental work would suffice.
Why don't people check their answers? They don't like mental effort; this is the Law of Least Effort at work.
Intelligence vs Laziness vs Rationality
People given enough time and enough incentive would easily get these questions correct. Therefore, there's a difference between intelligence and laziness: you can be super smart but never bothered to pursue the correct answer.
So what is the opposite of being intellectually lazy? Being rational: avoiding "the sin of intellectual sloth". The rational are more alert, more intellectually active, less willing to be satisfied with superficially attractive answers and more skeptical about their intuitions.
Experiment: Two Cookies
Children were given the choice between having one cookie now or two cookies later. 10 years later, those who performed well in the experiment seemed much “better off” than those who didn’t:
- More self control: less likely to do drugs
- More intelligent: scored higher on intelligence tests
This study shows that there’s a large correlation between the ability to allocate attention (or more generally better executive control) and things like intelligence and self-control later on.
Experiment: Improving Attention
The University of Oregon did a study where they exposed children aged 4-6 to games that demanded attention and control.
Over the course of several months, several different things improved:
- Intelligence
- Executive control
- Emotional control
Parenting and genetics also affected the control of attention.
Consequences of Poor Control and Mental Laziness
People who uncritically follow the suggestions from System 1 are:
- Impulsive
- Impatient
- Keen to accept immediate gratification
Some examples:
- More likely to take $3,400 now than $3,800 in a month
- More likely to pay 2x as much for overnight shipping
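The first choice implies an enormous discount rate once you actually do the arithmetic (a quick sketch; the annualising step is my own addition):

```python
now, later = 3_400, 3_800
monthly_rate = later / now - 1            # premium demanded for waiting a month
annualised = (1 + monthly_rate) ** 12 - 1  # compounded over a year
print(f"monthly: {monthly_rate:.1%}, annualised: {annualised:.0%}")
# Taking the money now only makes sense if you discount the future at
# roughly 12% per month, which compounds to well over 200% per year.
```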
Links to System 1 and System 2
- System 1 is impulsive and intuitive
- System 2 is capable of reasoning… but: it is also lazy. It avoids mental effort where possible.
Rationality vs Intelligence
In Rationality and the Reflective Mind, Keith Stanovich argues that there are two parts of System 2:
- The algorithmic mind, which deals with slow thinking and demanding computations
- The reflective mind, which deals with being critical. I'm not really sure I've done this definition justice.
The algorithmic mind gives rise to intelligence, the reflective mind gives rise to rationality.
Mental laziness is a flaw of the reflective mind. This is why you can be intelligent but lazy at the same time.
Association
Associative Memory
Associative memory is a vast network of ideas in the brain.
Ideas are densely linked.
- (Effect) Virus -> Cold
- (Category) Banana -> Fruit
- (Property) Lime -> Green
- …
When you see a word, the node in the network is activated. This causes a cascading effect of activity where nodes trigger their links, and the links trigger their own links, and so on (though the strength of activation diminishes greatly with each hop).
This happens automatically and unconsciously. This is the explanation for some forms of priming.
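The cascading activation described above can be sketched as spreading activation over a graph; the network, decay factor and threshold here are all made up for illustration:

```python
from collections import deque

def spread(graph, start, decay=0.5, threshold=0.1):
    """Spread activation outward from `start`: each hop multiplies the
    signal by `decay`, and signals below `threshold` die out."""
    activation = {}
    frontier = deque([(start, 1.0)])
    while frontier:
        node, signal = frontier.popleft()
        # Skip weak signals and nodes already activated at least as strongly.
        if signal < threshold or activation.get(node, 0.0) >= signal:
            continue
        activation[node] = signal
        for neighbour in graph.get(node, ()):
            frontier.append((neighbour, signal * decay))
    return activation

# A tiny, made-up slice of associative memory.
graph = {"virus": ["cold"], "cold": ["winter", "tissue"], "winter": ["snow"]}
print(spread(graph, "virus"))
# "virus" activates "cold" strongly; "snow", three hops away, barely registers.
```

The geometric decay mirrors the note that activation "diminishes greatly with each hop".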
(Not) Priming
Embarrassingly, claims about priming have recently been refuted:
- https://www.nature.com/articles/d41586-019-03755-2
- https://mindhacks.com/2017/02/16/how-replicable-are-the-social-priming-studies-in-thinking-fast-and-slow/
- https://replicationindex.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/
Despite the quotation:
“Disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true”
Even Daniel Kahneman has since published work saying that researchers should be skeptical of small sample sizes.
The general gist of the idea was that associative activation could make big changes to behavior. A couple of examples were given:
- Participants who saw words associated with old age were more likely to walk slower
- People who walked slower than normal were more likely to recognise words associated with old age
- Making people smile made them happier
- Making people frown made them more emotional
- Nodding made you agree more
- Ideas of money made people more independent, selfish, cautious
- Lady Macbeth effect, feeling ashamed primed ideas of washing yourself
- Pictures of eyes instead of flowers made people more honest
He drew a couple of big conclusions about culture too:
- Religious societies being reminded of God might make them more honest
- Authoritarian societies being reminded of Big Brother might make them less independent
Some forms of priming are well established:
- Seeing a word like DOCTOR makes you recognise a word like NURSE more easily. (Semantic priming)
Association and System 1
Association is a System 1 sort of response. This explains why it can happen automatically and unconsciously.
Why Does Association Exist?
Associative activation is useful for gauging threats and preparing an individual for a new situation. Since everything is in a vast network of ideas, seeing the occurrence of something means there's a context for future developments, i.e. you'll be prepared.
Cognitive Ease and Cognitive Strain
Cognitive Ease
Cognitive ease is a measure of the amount of pressure on the brain at any given time. It ranges from “easy” to “strained”:
- Easy: a sign that things are going well, no threats, no major news, no need to redirect attention, no need to mobilize effort
- Strained: you’re in a difficult situation, you’re expending a lot of mental effort, there are a lot of unmet demands
Effects of Cognitive Ease
Think:
- Good mood
- Intuition
- Creativity
- Gullibility
- System 1
It means things:
- Feel familiar
- Feel true
- Feel good
- Feel effortless
This means you're in a good mood, but it has negative consequences for decision making:
- Casual and superficial in thinking
- Less suspicious
Cognitive ease means that a lot of decisions are being made by System 1. This is why thinking is relatively surface level.
The effects and causes are interchangeable: a good mood means you're more likely to feel at ease, and if you feel at ease then you're more likely to be in a good mood. A good mood means that you're more intuitive and creative but less vigilant and prone to logical errors.
Effects of Cognitive Strain
Think:
- Vigilance
- Suspicion
- Sadness
- Analytic approach
- System 2
This means things:
- Feel wrong
- Feel difficult
Cognitive strain means that a lot of decisions are being made by System 2 since it is busy. This is why cognitive strain lends itself to a more analytic approach.
Again, the effects and causes are somewhat interchangeable: A bad mood or a threatening situation means that you’re more likely to feel cognitively strained, and if you feel strained you’re more likely to be in a bad mood. A bad mood means that you’re more suspicious and therefore less prone to logical errors.
Biological Reasons For Cognitive Ease
Cognitive ease means that things are familiar and safe, so it’s okay to let your guard down. Cognitive strain means that things aren’t familiar; things aren’t going very well, there may be a threat and vigilance is required.
Reacting cautiously to a new stimulus means you’re more likely to survive.
“That’s exciting, I’ve never heard a rumbling sound coming from the bushes before! Let me go and see what it’s about!”.
Engineering for Cognitive Ease
There are some things you can do to create a sense of cognitive ease.
- Make something seem familiar
- Make something clear:
- Use a legible font
- Use simple language
- Make important information bold
- Make something fluent
- The example given in the book is that people were biased in their ratings of fictitious companies by how easy their names were to pronounce (Artan vs Taahhut).
- Make it memorable
- Rhyming
- Putting ideas in verse
- Make it effortless
You might want to apply these techniques because cognitive ease makes things seem more true.
There are a couple reasons why this happens:
- Sometimes a message feels wrong due to a sense of cognitive strain, since you’re more likely to agree with something you can understand clearly.
- It means people are more likely to think intuitively since System 2 isn’t being mobilised as much.
- You know something is true if it is strongly linked by logical association to other beliefs or if it’s from a source you trust or like. These things induce a sense of cognitive ease. But so does legibility!
This isn't because of super scary dark suggestion techniques: a message under cognitive strain simply feels less true, and ease stops people from thinking too hard about the argument you're trying to make.
The Mere Exposure Effect and Familiarity
Sometimes, judgements are based on a sense of cognitive ease or strain rather than the actual content. The book gives the example of a driving test: since they're multiple choice, one way of passing the test is to read through a handbook and pick answers based on their sense of familiarity.
If something is familiar, it induces a sense of cognitive ease. If you don’t have any other way of making a decision, you rely on this sense.
The Mere Exposure Effect is the fancy name for familiarity and repetition making people like things more. If people are familiar with something, it induces a sense of cognitive ease.
Creativity in Terms of Cognitive Ease
One way of thinking about creativity is associative memory that works very well. If you can quickly iterate through many different combinations of ideas, you can think of different solutions.
Norms, Surprises and Causes
Norms
Through repeated experience, the large network of association in the brain builds up norms. Norms provide an idea of the typical or average attributes that an idea or concept takes. They also provide the range of plausible values.
For example, you know that a mouse is small, you’re not likely to have a head pop out of a pot when you open it, elephants are big, people don’t emerge from your furniture when you get home from work, tables don’t have 25 legs, water doesn’t come out of the tap red and (at least in British weather) it’s normally hotter inside a building than it is outside (this always messes me up during the summer).
There are norms for a vast number of categories, and these norms are used to detect anomalies.
Norms and Models
One of System 1’s jobs is to maintain and update a model of your world. Norms, which are largely informed by prior experience, are partly used to create this model.
Norms are formed from circumstances, events, actions and outcomes that happen regularly. As these happen more often, the links in the associative network become stronger and provide a model – an interpretation of the present as well as the expectations for your future.
Surprise!
When something doesn’t fit into your current model of the world, you could say that it violates a norm. This is one of the ways that System 2 is kicked into action.
Such events are what you feel as surprise. If there’s a surprise party on your birthday and people jump out to say “Boo!”, you are surprised because in your model of the world, people don’t emerge from furniture.
Causation
The associative network means that System 1 is very good at finding a causal story between events.
One example is the following short story:
After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.
The first thought that explains why Jane’s wallet was missing is that she was pickpocketed. This is because the explanation is associatively coherent; although a missing wallet could be explained by numerous other factors like slipping out of a pocket or being left on a table, the association between pickpocketing and crowded streets means that we use that explanation.
Jumping to Conclusions
Why Jump to Conclusions
Jumping to conclusions is efficient. Most of the time, it is far easier to shortcut a difficult decision process if it’s likely to be correct and the costs of making a mistake are acceptable. Jumping to conclusions lives in System 1.
In situations where jumping to conclusions is risky because the situation is unfamiliar (e.g. it violates a mental model) or because the stakes are high (e.g. deciding whether a path is free of savage bears), System 2 is likely to intervene and a more informed decision will be made.
System 1 makes bets, and the bets are guided by the norms of past experiences.
Decisions, Big and Small
Jumping to conclusions isn't just about big decisions. It happens all the time at an unconscious level. When you read
Ann approached the bank.
you "jump to the conclusion" that it's the kind of bank that involves money and ATMs, not a river bank (unless you've been thinking about rivers a lot recently; this is an example of a norm).
Your System 1 made the tiny decision to interpret the statement in a certain way.
No Alternatives When Jumping to Conclusions
When System 1 jumps to a conclusion, there's no conscious sense of ambiguity in the interpretation. You don't even realise that there are alternatives unless you're actively searching for them.
This is because the feeling of something being ambiguous involves considering multiple incompatible ideas at the same time, something that requires a lot of mental effort since it fills up your working memory.
Since System 2 is lazy and busy, it doesn’t critique every suggestion from System 1.
There’s evidence that when your System 2 is depleted, such as when you’re tired or you’ve been drinking, you’re more likely to be influenced by empty persuasive messages.
Confirmation Bias
Confirmation bias is the tendency to search for, favour and recall information that supports your prior beliefs or values.
This is a consequence of how associative memory works: you test a hypothesis by searching for confirming evidence (a positive test strategy).
Is Sam friendly? You search for evidence that Sam is friendly, automatically remembering all the ways Sam has been friendly in the past.
Is Sam unfriendly? You search for evidence that Sam isn’t friendly, remembering the ways she hasn’t been friendly rather than the ways she has. If you’re not actively engaging with the task, you’re unlikely to consider the opposite.
Confirmation Bias as Ambiguity
Another example is Alan being described as:
- Intelligent
- Industrious
- Impulsive
- Critical
- Stubborn
- Envious
Whereas Ben is described as:
- Envious
- Stubborn
- Critical
- Impulsive
- Industrious
- Intelligent
You’re more likely to see Alan as a good person than you are Ben, even though they were described using exactly the same adjectives.
Rather than each adjective being applied independently to your model of that person, they were weighted by the prior ideas you already had.
Every adjective is ambiguous on its own. Starting from a blank slate, each subsequent adjective is interpreted in a way that makes it coherent with the context and tells the best mental story.
Halo Effect
The Halo Effect is the tendency for positive impressions of a person, company, brand or product (but really any ‘concept’) to impact your feeling about that thing in other areas.
- If you like the president’s politics, you probably like their voice and appearance.
- If you meet a woman named Joan at a party and she seems easy to talk to, then when asked about the probability that she contributes to charity, you’re likely to overestimate because of your existing impression of her – when in reality, being easy to talk to doesn’t correlate well with giving to charity.
- If someone is attractive, you’re more likely to agree with them. Without evidence of how true their claims actually are, your judgement is filled in by a guess that fits your emotional response to them.
This is down to confirmation bias.
Without hard evidence (which is rarely pursued due to System 2’s laziness), you jump to conclusions about that person using your previous impressions and initial emotional response.
These concepts provide a framework for why first impressions matter: your interpretation of a person is shaped by slowly accumulating evidence that is distorted by the first impression.
Example: Confirmation Bias and Assessment Marking
- The author used to mark students’ work one student at a time.
- He found that if a student did well on the first question, they were more likely to get good marks on the rest of the questions.
- He suspected that this method of grading created a Halo Effect: the mark for the first question would bias the marks for the others.
- He switched to a method where he marked each question independently, without knowing how the student did on the previous questions.
- This made his marking more accurate.
Decorrelating Error
Sometimes, a group of people making a decision will do better than a single person making a decision.
This is because errors will average out to zero: some people will be pretty accurate, some will overestimate and some will underestimate.
This only works if the errors are uncorrelated. If there’s a certain common reason for making an error, such as a bias, then this will not happen.
To derive the most useful information from multiple sources of evidence, you should try to make these sources independent of each other.
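The averaging claim can be sketched with a small simulation (the true value, noise level and bias here are made-up numbers for illustration): each person’s estimate is the truth plus individual random noise, plus optionally a bias shared by the whole group.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0  # the quantity everyone is trying to estimate

def estimate(shared_bias=0.0):
    """One person's noisy estimate: truth + any shared bias + individual error."""
    return TRUE_VALUE + shared_bias + random.gauss(0, 10)

# Uncorrelated errors: individual mistakes cancel out as the group grows.
independent = [estimate() for _ in range(10_000)]
mean_independent = sum(independent) / len(independent)

# Correlated errors: a bias shared by everyone survives averaging,
# no matter how large the group is.
biased = [estimate(shared_bias=15.0) for _ in range(10_000)]
mean_biased = sum(biased) / len(biased)

print(f"independent group's mean estimate: {mean_independent:.1f}")  # close to 100
print(f"biased group's mean estimate:      {mean_biased:.1f}")       # stays near 115
```

Averaging only removes the independent noise term; a shared bias passes straight through to the group average, which is why witnesses who confer beforehand, or meeting participants anchored by the first speaker, produce errors that no amount of averaging can cancel.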
Examples of Decorrelating Error
- When the police are collecting reports from multiple witnesses, the witnesses aren’t allowed to discuss the event before they’ve made a statement. This prevents the witnesses from influencing each other and keeps their errors uncorrelated.
- “He looked angry”
- “Oh yeah he did”
- In order to make meetings more effective, participants should briefly write down their thoughts beforehand. This is to prevent giving too much weight to the opinions of those who speak early and assertively.
What You See Is All There Is (WYSIATI)
This goes back to the idea that there are No Alternatives When Jumping to Conclusions. The associative machine only contains the activated ideas. Information that is not retrieved has no bearing on the outcome.
This phrase does not mean that there literally is nothing else, just that your interpretation of events acts as if what you see is all there is. It’s impossible to take into account things you can’t possibly know.
Sometimes you pursue missing information when it’s super important, like when buying a house or making sure you’re not being ripped off. But this bias – the bias of limited evidence – still has a huge bearing on big decisions.
Misappropriate Models
You could measure System 1’s success by how well it builds a good mental model from a given set of facts. The book calls these the “stories” that you make from evidence, though I find that a little harder to understand.
The problem is that the wrong sort of things can inform these models. We don’t take into account the quantity or quality of particular evidence, just the quality of the model (or story) that we can create from it. We can put too much confidence in our models just because they seem like a good fit even if we have too little evidence.
Consequences of WYSIATI
- Overconfidence: we put too much faith in our models and think they’re good for the wrong reasons. We don’t allow for the possibility that the evidence crucial to our judgement is missing.
- Framing effects: “90% fat-free” vs “10% fat”, they both mean the same thing but have different connotations on their own. We don’t consider the other view.
- Base-rate neglect: we ignore the base rate (such as the fact that there are far more farmers than librarians) because it doesn’t come to mind.
How Judgements Happen
Automatic vs Manual Assessments
System 2 has the capacity to attempt to answer an unlimited number of questions or evaluate an unlimited number of attributes:
- You could count the number of letters on this page
- You could make an informed voting decision
- You could work out the most cost efficient ingredients for cooking a meal
- Etc.
The main thing here is could: all of these judgements are manual. There are two reasons why they don’t take place in System 1:
- Since counting the number of letters on a page has no bearing on our survival, we haven’t evolved to do it automatically.
- Some of these assessments require a lot of mental effort, something System 1 isn’t made for.
For some assessments (like those we have evolved to make), System 1 operates completely automatically and continually assesses the situation. We don’t have any conscious control over these judgements.
- Is there a threat or major opportunity?
- Should I approach or avoid?
- How safe do I feel right now?
- Am I in pain?
- Am I hungry?
- Is this person safe?
These are called basic assessments.
Basic assessments are baked into our evolution: they were very useful on the savannah, but can cause problems in today’s society.
Example: Basic Assessment of Strangers
When we see a stranger’s face, we assess two things automatically:
- Dominance, from the shape of their face (think square jaw)
- Intentions, from their facial expressions
This is automatic and cannot be turned off. If you see a man on the street with a strong face and an angry countenance, you feel a sense of threat.
The problem is that these assessments are sometimes inaccurate. A strong jawline doesn’t necessarily mean dominance, and a smile can be faked (though even an imperfect impression still confers a survival advantage).
Basic Assessments and The Halo Effect
This provides more reason for why the Halo Effect exists: in the absence of quality evidence, or when System 2 is being lazy, a person will fall back on a simpler assessment that was performed automatically by System 1.
For example, if you’re voting for a politician and are not informed about their policies, you will fall back on the automatic judgement of how dominant or competent they look.
Sets, Prototypes and Sum-Like Variables
Basic assesments cover a wide range of attributes, but not every kind of attribute is assessed this way.
In particular, System 1 is good at representing categories by a prototype or a set of typical examples.
```
-
-------
---
--------
----
------
```
This is why it’s easy to intuitively think of the average length of the above lines, which acts as the “prototype”. But it’s a lot more difficult to think of the total length of all the lines, since this isn’t how System 1 represents categories.
This means that System 1 doesn’t do well when making decisions that involve attributes which aren’t easy to conceptualise within the sets-and-prototypes framework. These are called sum-like variables.
The total length of the lines is a sum-like variable, since it involves adding together each item separately rather than blending them all into one prototype.
This provides a partial explanation for things like Misappropriate Models, since we can create an inaccurate representation of things.
Scope Insensitivity
The disregard for sum-like variables is manifested in Scope Insensitivity: when the valuation of a problem is not judged in relation to its size.
Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.
The number of birds made no difference, instead it was the image of a bird covered in oil that gave the basis for the response.
Intensity Matching
System 1 is good at reasoning about the intensity or magnitude of something:
- Happiness: “I feel very happy”
- Height: “She is very tall”
- Crime: “That crime is very bad”
Intensity can match across different attributes. Consider:
Julie read fluently when she was four years old.
We can match the intensity to a different attribute/dimension:
How tall is a man who is as tall as Julie was good at reading? Which crime is as severe as Julie is good at reading?
Sometimes this can be harmless, like in the above examples. We “feel” it. It explains why we can rank different punishments, when really it’s kind of subjective.
However, this can pose problems. Consider:
What test mark is as good as Julie is at reading?
If System 2 is being lazy, it will match the intensity of her ability to read to a test mark, which is statistically inaccurate.
Mental Shotgun
Since System 1 operates automatically, when System 2 is trying to make a judgement about one specific thing, System 1 will make judgements about lots of things at once.
This is why it’s called a shotgun: it’s impossible to aim a shotgun at a single point, in the same way it’s impossible to “aim” System 1 at a single judgement.
Trying to make an objective judgement about someone will bring up automatic assessments of other attributes, like their attractiveness.
Answering an Easier Question
Substituting Questions
In order to generate intuitive answers to complex questions, we tend to subconsciously replace the complex question (the target) with a much simpler one (the heuristic). A heuristic is a simple procedure that helps find adequate answers to difficult questions.
Sometimes System 2 is conscious of the substitution and still endorses it, since it provides a good-enough answer. However, since System 2 is lazy and follows the Law of Least Effort, it mostly accepts the substituted answer without modifying it further by incorporating other information.
A judgement based on a substitution will sometimes be correct, but will inevitably be biased in certain ways.
Conscious vs Unconscious Heuristics
Sometimes you can consciously enlist a heuristic to aid problem solving. This means actively doing something like looking for a simpler problem that you can solve, or asking how you can generalise the current problem.
Unconscious heuristics are heuristics automatically implemented by System 1; they are a consequence of the mental shotgun. The problem is that these unconscious heuristics sometimes introduce predictable biases.
Examples of Question Substitution
|Target|Heuristic|
|-|-|
|How much would you contribute to save an endangered species?|How much emotion do I feel when I think of dying dolphins?|
|How happy are you with your life these days?|What is my mood right now?|
|How popular will the president be six months from now?|How popular is the president right now?|
In all these examples, the target question is a hugely complicated topic. How do you define happiness? What is the current state of politics?
And yet you still have a vague idea thanks to question substitution. It’s a mental shortcut, designed to reduce effort without imposing much hard work on your lazy System 2.
The Impact of Intensity Matching
Intensity matching makes it easy to use answers to the heuristic question to inform the target question.
In “How much would you contribute to save an endangered species?”, you need to convert your emotional response into a numerical amount. Thanks to System 1’s aptitude for gauging how “intense” something is, the answer can be carried over.
The Mood Heuristic and Predictable Bias
The two questions:
How happy are you these days?
How many dates did you have last month?
get different responses to:
How many dates did you have last month?
How happy are you these days?
even though if completely rational people were answering, the order would make no difference.
The question substitution here vaguely turns “How happy are you these days?” into “How happy are you with your love life?”. This is predictably biased, since the answer becomes overly dependent on one particular part of life.
This is a consequence of WYSIATI: the associative machine is primed for questions about dating, so the evidence retrieved for how happy you are with your life is evidence about how happy you are with dating. You fail to consider the alternatives and aren’t aware of how limited your evidence is: WYSIATI.
The Affect Heuristic
The affect heuristic can be summarised as people letting their likes and dislikes determine their beliefs about the world.
- Your political preference determines the arguments you find compelling.
- If you like a person, you’re more likely to agree with them.
- If you don’t like a certain food, you’re more likely to see it as being bad for you.
Your emotional attitude towards things determines your beliefs about their benefits and risks.
Summary of System 1
The quick, dirty, automatic form of thinking.
- Generates impressions, feelings and inclinations
- When System 2 agrees, these become beliefs, attitudes and intentions.
- Operates automatically and quickly, with little or no effort and no sense of voluntary control
- Can be programmed by System 2 to mobilize attention when a particular pattern is detected (like paying special attention to looking for a friend in a crowd)
- Executes skilled responses and generates skilled intuitions after adequate training (how experts become experts)
- Creates a coherent causal story between activated ideas in associative memory
- Links a sense of cognitive ease to illusions of truth, pleasant feelings and reduced vigilance
- Distinguishes the surprising from the normal
- Infers and invents causes and intentions
- Neglects ambiguity and suppresses doubt
- Is biased to believe and confirm
- Exaggerates emotional consistency (Halo Effect)
- Focuses on existing evidence and ignores absent evidence (WYSIATI)
- Generates a limited set of basic assessments
- Represents sets by norms and prototypes, does not integrate (or differentiate for that matter)
- Matches intensities across scales
- Computes more than intended (mental shotgun)
- Sometimes substitutes an easier question for a difficult one (heuristics)