Thinking, Fast and Slow - Overconfidence
This part of [[Thinking, Fast and Slow, Kahneman]]N covers how we are unreasonably certain in our own beliefs, forgetting to factor in things such as chance when making decisions. It covers the ways in which we overestimate our own understanding.
Mindmap Sections
- Understanding: We overestimate how much we understand and fail to appreciate the effects of chance.
- Validity: Validity primarily comes from a feeling of cognitive ease rather than a judgement of how accurate our beliefs are.
- Intuition: Why we can’t trust expert intuition, how actual intuition works.
As interconnected effects of all of these:
- Optimism, the Planning fallacy, all that jazz
- Algorithms and formulas, why they do better
Notes
The Illusion of Understanding
The chapter goes over how some biases and heuristics (such as the Halo Effect) make things seem a lot simpler and more regular than they actually are.
This chapter explains this mainly in two ways:
- Hindsight bias: After we know the outcome of something, we can’t imagine it not coming out that way.
- Chance: chance plays a larger role in success and stories than we give it credit for.
Narrative Fallacies
Narrative fallacies are a consequence of us continuously trying to make sense of the world in a causal way. The explanatory stories that people find compelling are simple; concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that did happen rather than the numerous ones that didn’t.
The [[Fundamental Attribution Error]]? is an example of this. You’re much more likely to interpret behavior as general personality traits rather than situational effects, both because it tells a better story and because of WYSIATI; you’re unlikely to factor in the things you cannot see.
Narratives and Hindsight Bias
A compelling narrative gives an illusion of inevitability.
- The story of how Google came to be seems compelling and makes their success seem inevitable. In reality, luck played a huge factor. It’s likely there were loads of other startups with similar stories that never had the luck.
- The story of the 2008 financial crisis seems knowable in hindsight. Many people claim to have known the 2008 financial crisis was inevitable, when they clearly didn’t. If they had genuinely known, the crisis would have been widely anticipated.
- The story of 9/11 seems compelling in hindsight. How could the intelligence agencies not have noticed such clear evidence that a terrorist attack was coming? Because, at the time, they didn’t have hindsight.
Hindsight bias, or the I-knew-it-all-along effect, is the name for this. It’s the common tendency for people to perceive past events as having been more predictable than they actually were.
It comes from a general limitation of the human mind: its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. It’s nearly impossible for you to go back to a time before you had a crucial piece of evidence or knew the outcome.
It’s the tendency to revise the history of your beliefs in light of what actually happened.
Outcome Bias
Outcome bias is closely related to hindsight bias. It’s the tendency for people to judge a decision based on its outcome rather than the factors that led to the decision.
For example, if a doctor gives an experimental medication to a child that has a 50% chance of curing a disease and it kills the child, the doctor will be criticised. If the treatment succeeds, then he will be praised.
It’s difficult to make objective judgements about the quality of a decision when the outcome of the decision was already known.
It’s different from hindsight bias in that it deals with how you critique decisions, whereas hindsight bias deals with how people exaggerate the extent to which they would have predicted the event beforehand, now that they know the outcome.
Telling Stories Forgets Luck
When creating a causal, narrative story to describe how something came to be, we don’t focus on luck. This is the reason why we don’t understand regression to the mean: we’re keen to explain random phenomena as the result of mechanical processes.
A book was published that analysed why “favourite” stocks did well. After the book was published, the highly-rated stocks declined and the lowly-rated ones improved, on average. That’s regression towards the mean.
The Halo Effect
The Halo Effect has a big bearing on our illusion of understanding. When evaluating a company, we give the CEO a large share of the credit for its success.
If a company is doing well, we might attribute it to a good CEO. If a company is doing poorly, we will say that it’s due to a bad CEO. The halo effect means we get this relationship completely backward; our impressions of the CEO are guided by the success of the company, not the other way around.
Another example is:
Suppose you consider many pairs of firms. The two firms in each pair are generally similar, but the CEO of one of them is better than the other. How often will you find that the firm with the stronger CEO is the more successful of the two?
The intuitive response is 100% of the time; of course a better CEO would mean a better company. But the correlation between CEO quality and firm success is only around 30%, which translates to the firm with the stronger CEO being the more successful of the two only about 60% of the time.
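One way to see where the ~60% figure comes from is a quick simulation (a sketch of my own, assuming CEO quality and firm success are roughly jointly normal with a correlation of 0.30):

```python
import numpy as np

# Monte Carlo sketch: if CEO quality and firm success correlate at only 0.30,
# how often does the firm with the stronger CEO turn out to be the more
# successful one in a random pair of firms?
rng = np.random.default_rng(0)
rho = 0.30
cov = [[1.0, rho], [rho, 1.0]]

# Each row is one firm: (CEO quality, firm success), jointly normal.
firms = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

# Pair the firms up and check whether the stronger CEO sits in the stronger firm.
a, b = firms[0::2], firms[1::2]
agree = np.sign(a[:, 0] - b[:, 0]) == np.sign(a[:, 1] - b[:, 1])
print(agree.mean())  # comes out around 0.60, not the intuitive ~1.0
```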
The Illusion of Validity
This chapter is about our overconfidence in how valid our own beliefs are. This arises from the fact that subjective confidence in a judgement is not a reasoned evaluation of the probability that the judgement is correct. Instead, confidence comes from a feeling that reflects the coherence of the information available and the cognitive ease of processing it.
This is systematically biased in certain ways. The confidence we have doesn’t take into account the amount of evidence or the quality of the evidence. Poor evidence can make a very good story.
If someone is highly confident about their beliefs, it often means that they have constructed a coherent story in their mind, not necessarily that the story is true.
Misplaced Confidence
Soldier Evaluation Example
One example in the book is that the author used to evaluate potential officers for the Israeli army by observing teamwork exercises. He would note which candidates took charge and which reacted badly, and used that to make a reasoned judgement about how they would perform later on.
When reviewing the candidates later, it appeared that the rankings given were little better than random guesses.
This should have meant that he no longer felt confident in his judgements when choosing officers in the next round of training. This was not the case. Despite knowing the truth about the quality of his predictions, he continued to feel and act as if the specific predictions he made were accurate. He didn’t even moderate his beliefs afterwards.
This was an example of the representativeness heuristic creating a non-regressive belief – he failed to consider the extent to which a positive performance in training correlated with being an effective officer.
Stock Picking Example
If the [[Efficient Market Hypothesis]]N says that it’s impossible to beat the market except by chance or with a unique edge, then why do people continue to attempt to pick stocks?
A stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people think the company will be worth more tomorrow, they will buy the stock today. The opposite is also true: if people think the company will be worth less tomorrow, they will sell the stock.
In an efficient market, you therefore cannot expect either to gain or to lose by trading.
Several analyses have been done that show the large majority of stock-picking results are due to chance rather than skill.
One study showed that the most active traders often perform the worst, and that holding onto the same investment for a longer amount of time would’ve been more effective. This is backed up by the fact that hedge funds often perform worse than the average market return, and the year-to-year correlation of success is close to zero. It’s only a little better than a shot in the dark.
The availability heuristic also informs investors on which companies to focus on. This means people flock to companies which are in the news. Some professional financiers use this to their advantage and take money away from individual investors.
Ugh Fields?
One explanation for misplaced confidence in big problems like financial investing is Ugh fields (I think). Facts that challenge basic assumptions and threaten people’s livelihood and self-esteem aren’t absorbed.
Financial investors don’t want to face the fact that they do worse than average.
Pundits
A pundit is an expert in a field who offers their opinion to mass media. Think political analysts, sports commentators or high-level financial advisors.
The tendency to accept coherent narratives of the past contributes to our failure to recognise the limits of our forecasting ability. This is hindsight bias affecting the confidence we place in our future decision making.
The idea that luck shapes history much more than we think it does is shocking. If history is mainly shaped by luck, then how is it possible for “experts” to be correct most of the time?
The reality is that they’re not. In a huge study that gathered more than 80,000 predictions, experts did worse than if they’d just assigned an equal probability to all possible outcomes. Experts were not significantly better than nonspecialists.
Furthermore, experts/pundits who were “in demand” had more outrageous and overconfident predictions.
Experts develop an enhanced illusion of skill and become unrealistically confident.
Hedgehogs vs Foxes
Even when experts are wrong, they rationalise their decisions.
One metaphor is of hedgehogs and foxes:
- Hedgehogs “know one big thing”. They have a coherent framework and theory with which they view the world, are confident in their beliefs and are impatient towards those that don’t agree with them. They’re reluctant to admit error; a failed prediction is always “off only on timing” or “very nearly right”. They are opinionated and clear, which is what the media wants to see.
- Foxes are complex thinkers. They don’t believe that the world can easily be explained by one theory, and recognize that reality emerges from the random interactions of many different agents and forces, some of which are blind luck. They don’t get invited to television debates. They also don’t have friends. That last bit isn’t true.
Many experts are hedgehogs.
I feel like I’m not grasping this quite right… is it really true that someone who studies politics for 10 years won’t make better predictions? I doubt if you told an expert that all their knowledge was worthless, you’d get a good response. Hmmm.
Predictions are Hard
Errors of prediction are inevitable because the world is unpredictable. Randomness is more prevalent than we like to imagine.
Short-term trends can be much more easily forecast than long-term ones. It’s like chaos theory: a small random event can cascade into the future and throw off all informed predictions.
Intuitions vs Formulas
This chapter brings to light the fact that in most cases, simple formulas do much better than expert intuition at predicting long-term outcomes.
Intuition Vs Formula Examples
Infant Deaths
Infant mortality can be prevented more effectively by using a simple formula than by relying on the intuitions of clinical staff.
The formula combines 5 variables, such as activity, pulse and appearance, with the final score being the total of these 5 variables.
By applying this score, delivery room staff had a consistent way of ensuring that the babies which needed attention got the attention that they needed. This prevented infant deaths.
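The formula being described is the Apgar score: each of five signs is rated 0–2 and the ratings are summed. A minimal sketch of that kind of additive checklist (the alert threshold below is illustrative, not part of the notes):

```python
# Additive, checklist-style score: rate each of five signs 0-2 and sum them.
# The component names follow the Apgar score; the cutoff is illustrative.
def newborn_score(appearance, pulse, grimace, activity, respiration):
    components = (appearance, pulse, grimace, activity, respiration)
    assert all(0 <= c <= 2 for c in components), "each sign is rated 0, 1 or 2"
    return sum(components)

score = newborn_score(appearance=1, pulse=2, grimace=1, activity=1, respiration=2)
print("intervene" if score <= 4 else "routine monitoring", f"(score {score})")
```

The value isn’t cleverness; it’s that every baby is judged on the same cues, combined the same way, every time.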
Wine Prices
Some rich people “invest” in wine, buying a bottle in the hope that it will go up substantially in price as it ages. In order to gauge how much they’re willing to invest, they will taste the wine.
Someone came up with a simple formula that incorporated 3 facts: the amount of rain during the harvest, the average temperature over the growing season and the total rainfall during the previous winter.
Despite pushback from wine fans, the formula’s predictions correlated with actual prices at around $0.9$.
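A sketch of what such a formula amounts to: a weighted sum of the three weather variables. The coefficients below are invented for illustration; only the three inputs come from the text.

```python
# Illustrative linear formula in the spirit of the wine-price predictor above.
# The coefficients are made up; only the three weather inputs are from the notes.
def predicted_quality(harvest_rain_mm, growing_season_temp_c, winter_rain_mm):
    return (
        -5.0
        + 0.60 * growing_season_temp_c  # warmer growing seasons help
        - 0.004 * harvest_rain_mm       # rain at harvest hurts
        + 0.001 * winter_rain_mm        # wet winters help
    )

print(predicted_quality(harvest_rain_mm=80, growing_season_temp_c=17.5, winter_rain_mm=600))
```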
Army Interviews
The normal Israeli interview process was to conduct a 30 minute interview which covered a variety of topics and to form a general impression of how well the recruit would do in the army.
Kahneman proposed a process where 6 relevant variables (rated subjectively) were considered. There was pushback from the interviewers, and so he compromised by letting them also note down an intuitive judgement at the same time.
The formulaic approach did moderately well compared to the previous system, in which there was little to no correlation between the initial predictions and the final outcome.
The intuitive judgement was also good, but Kahneman argues that it was only after the process of collecting the data for the algorithm that the intuitive judgement was well-informed enough to create an accurate score. It’s like they ran a version of the formula in their heads.
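A sketch of that compromise process (the trait names and the 1–5 scale are placeholders; the text only specifies six subjectively-rated variables plus a separate intuitive judgement):

```python
# Score six traits separately on a fixed scale, sum them for the formula's
# prediction, and record the interviewer's holistic intuition alongside it.
def interview_record(trait_ratings, intuitive_rating):
    assert len(trait_ratings) == 6, "the procedure used six separately-rated traits"
    assert all(1 <= r <= 5 for r in trait_ratings.values())
    return {
        "formula_score": sum(trait_ratings.values()),  # mechanical combination
        "intuitive_score": intuitive_rating,           # noted, not discarded
    }

ratings = {f"trait_{i}": r for i, r in enumerate([4, 3, 5, 2, 4, 3], start=1)}
print(interview_record(ratings, intuitive_rating=4))
```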
Meehl Patterns
Paul Meehl was a researcher who analysed whether clinical predictions were better than statistical predictions. The answer was no. This is clear in the examples, in which a simple, statistical combination of variables leads to better predictions than those of trained experts.
Examples cover everything; statistical predictions beat clinical predictions in loads of different areas:
- Longevity of cancer patients
- Length of hospital stays
- Diagnosis of cardiac disease
- Susceptibility of babies to sudden infant death syndrome
- Economic measures
- Evaluation of credit risks
- Future career satisfaction
Predicting the longevity of cancer patients involves forecasting over a considerable amount of time, which is non-trivial because of all the different factors and uncertainties involved.
In some cases, even when the experts were given the formula’s output, they still did worse than the formula. This is because they tried to be clever and overrode it to incorporate extra information.
High vs Low Validity Environments
If a situation is very turbulent, involving a significant degree of uncertainty and unpredictability, it’s called a “low-validity environment”. This is where intuition does poorly, for the same reason it’s difficult for experts to make big judgements that span large timeframes. Formulas win here.
The opposite, a high-validity environment, is where intuition is useful. This often involves short timeframes. A therapist’s judgement about how a patient will respond to a certain type of treatment is short term, so more likely to be accurate.
Why Do Algorithms Win?
There are a few arguments here:
- Experts try to think outside the box and try to be clever. Regression to the mean indicates that clever things tend to work worse than the average solution.
- Experts are inconsistent. In one study, radiologists evaluating chest X-rays contradicted themselves 20% of the time when shown the same image on separate occasions. How are unreliable judgements supposed to be correct?
- This might be due to a highly volatile System 1. The argument in the book is that priming shows that small stimuli can have a large bearing on decisions, so expert judgements are affected by little things. Since priming was largely debunked and this might be just trying to find a causal story, I’m not sure I completely buy this argument. I’m probably wrong though… this guy does have a Nobel Prize.
- Statistical analysis finds the best possible combination of all the variables to fit the data. It makes sense that the “optimal model” would be better than the sub-optimal model we create in our heads.
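A toy illustration of the last two points (entirely synthetic data; it just shows that a consistently-applied least-squares fit beats a judge who weights the same cues noisily from case to case):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_cues = 500, 4

# Synthetic world: the outcome is a fixed weighted sum of cues plus noise.
cues = rng.normal(size=(n_cases, n_cues))
true_weights = np.array([0.8, 0.5, 0.3, 0.1])
outcome = cues @ true_weights + rng.normal(scale=1.0, size=n_cases)

# "Statistical" judge: ordinary least squares, the same weights for every case.
fitted, *_ = np.linalg.lstsq(cues, outcome, rcond=None)
model_pred = cues @ fitted

# "Expert" judge: roughly the right idea, but the weights wobble from case to
# case (the inconsistency described above).
wobbly_weights = true_weights + rng.normal(scale=0.5, size=(n_cases, n_cues))
expert_pred = np.sum(cues * wobbly_weights, axis=1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("model  vs outcome:", corr(model_pred, outcome))   # ~0.7
print("expert vs outcome:", corr(expert_pred, outcome))  # ~0.5
```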
Algorithms are a No Fun Zone
There’s a lot of pushback against adopting formulas to make judgements even though they are much better. The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring because of human error.
It reminds me of The value of a life, specifically the button example.
Imagine a button which, when pressed, picks a random number between 1 and a million. If that number is 1, it kills a randomly selected person. How much would somebody have to pay you to press that button?
10 dollars. That will save the most lives. This mechanical, cold formula seems wrong. The aversion to algorithms is because our sympathies always lie with the human; we have a natural bias towards the natural over the unnatural.
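A rough expected-value sketch of why a low price still comes out ahead (the cost-to-save-a-life figure is an illustrative assumption, not from the book or the original post):

$$\text{expected deaths per press} = \frac{1}{1{,}000{,}000} = 10^{-6}$$

$$\text{expected lives saved by donating the 10 dollars, at roughly 5{,}000 dollars per life} \approx \frac{10}{5{,}000} = 2 \times 10^{-3}$$

Under those assumptions, pressing and donating saves orders of magnitude more lives in expectation than the press risks, which is why the cold answer is the one that saves the most lives.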
This attitude is reflected in the description of statistical methods by clinicians:
mechanical, atomistic, additive, cut and dried, artificial, unreal, arbitrary, incomplete, dead, pedantic, fractionated, trivial, forced, static, superficial, rigid, sterile, academic, pseudoscientific and blind
It’s hard to do the best thing.
Trusting Expert Intuition
There are several opponents to Daniel Kahneman’s theories about expert intuition. One of these groups follows what is called NDM (naturalistic decision making). Proponents of this theory argue that expert intuition is often valid and are skeptical about replacing highly skilled people with simple formulas.
This chapter is about where we can draw the line between trusting experts and not trusting their judgements.
Intuition as Recognition
Early on in the book, there is an example of a firefighter that instinctively feels like they should get out of a certain room. After they safely do, the floor collapses in the room they were just standing in.
If expert intuition doesn’t work well, then why did it save lives in this case?
The recognition-primed decision model is an explanation of how this works. It suggests intuition comes from two separate steps which involve System 1 and System 2.
- A tentative (not certain) plan or judgement comes to mind due to associative activation in System 1.
- An evaluation of the plan or judgement by System 2.
A quote from Herbert Simon about expert intuition:
The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.
This statement reduces the magic of intuition to the regular experience of memory. When a child looks at a dog and says “doggy”, they’re exercising intuition in the same way that a chess grandmaster does.
Learning Intuition
One way in which we learn intuition is through emotion. Like Pavlov’s dogs that learn to associate the ring of a bell with food, we learn intuition through the repeated association of certain events with positive or negative emotion (I think, “neurons that fire together wire together” or something like that?).
This learning by fear crops up in some areas of expertise. For firefighters, the threat of danger is enough of a motivator to learn the telltale signs that something is going wrong.
Most types of expert intuition, such as becoming a chess grandmaster or being a professional basketball player, work slightly differently. It’s repeated practice spanning many different scenarios that allows complex associative responses to develop.
In the book, he compares learning a high-level skill like chess mastery to learning to read. At the start of the process, you start by learning to recognise small units like letters or syllables before working up to words and eventually clauses. Experienced readers also gain the ability to disassemble new patterns into familiar elements that they can understand – ‘feeling’ what a new word means from the etymological connections that have formed in their brains.
It is much the same for chess; after several thousand hours of practice, you can “read” chess configurations and deal with new situations you have never encountered.
This links to [[Augmenting Long-term Memory, Nielsen]]N. Notice how the “chunks” at each stage get bigger: letters, syllables, words and phrases. It is the same for chess, with individual pieces, groups of pieces and finally whole configurations.
The Effect of the Environment
High Validity and Low Validity
This is what I was trying to get at with High vs Low Validity Environments.
Daniel Kahneman focuses on clinicians, stock pickers, and political scientists and critiques their ability to make accurate long-term forecasts.
Supporters of NDM focus on firefighters, chess grandmasters and similar professions.
The main difference here is the environment in which the intuitions are being used.
A high-validity environment is one in which there are valid cues that the expert’s System 1 has learned to use. Chess games are high-validity environments since chess is an ordered game.
A low-validity environment is one in which there are hardly any valid cues to inform the judgement. Stock pickers are in a low-validity environment since chance and many other factors contribute to the basic unpredictability of the environment.
Feedback and Practice
Some skills are easier to learn than others.
Learning to use the brakes on a car is much easier than learning to use the brakes on a boat, since there is immediate feedback every time you go around a bend. In a boat, you don’t feel the effects of your actions until a while afterwards.
Whether professionals have the chance to develop intuitive expertise depends essentially on the quality and speed of feedback, as well as the opportunity to practice.
I think this is where I’m personally trying to go with [[Sergeant]]?. Typical practice of questions is irregular and feedback often comes much later, like once the teacher has marked the work a week later. In Sergeant, the feedback loops are shorter and it offers an environment to practice the application of knowledge in a much more regular way. By making
Pseudo-Experts
The problem is that people mistake low validity environments for high validity environments due to heuristics and biases. Since confidence in a belief comes primarily from the coherence and ease with which the belief is constructed, it’s easy to have the illusion of validity around your own beliefs.
This is made worse by people extrapolating their own skills over the short term to the long term. A psychotherapist might predict what the patient is about to say next, and assume that this ability carries over to their abilities to make long term predictions.
WYSIATI also applies here. People don’t consider the fact that there might be other factors or evidence (such as luck) at play. The associative machine suppresses doubt.
Experts perform poorly in these types of situations because they are assigned tasks that don’t have a simple solution. Algorithms often do better in these cases because they are more likely to detect weakly valid cues and much more likely to maintain a modest level of accuracy by using such cues consistently.
Since low-validity environments are hard, heuristics are used. “How do I feel about the prospects of this company?” becomes “How do I feel about this company right now?”.
Due to an illusion of confidence, some experts think they know more than they actually do and think that their predictions are better than they actually are.
An environment that fosters expert intuition therefore:
- is sufficiently regular to be predictable: the associative machinery of System 1 can learn to recognise patterns and regularities
- has opportunities to learn these regularities through prolonged practice: there’s instant feedback for learning these regularities
The Planning Fallacy
This was one of the first rationality things I took notes on… [[The Planning Fallacy]]N. It’s nice to come back to it with a bit more of a solid foundation.
Simply put, the planning fallacy is the tendency for people to make overly optimistic plans, underestimating time and resources while overestimating benefits. People are more inclined to imagine scenarios of success while overlooking the potential for mistakes and miscalculations.
The planning fallacy creates plans which:
- are unrealistically close to best-case scenarios
- could be improved by consulting the statistics of similar cases
Planning Fallacy Examples
New Curriculum
The author’s main experience with the planning fallacy is when he was in charge of creating a new decision-based curriculum for Israeli education.
The process was going well and they had about 2 chapters of a new textbook written. When he asked people independently for their estimates of how long the rest would take (avoiding letting people be anchored by the first response and making use of all available information), the estimates came out at around 2 years.
When he asked the curriculum expert how long it had taken similar groups in the past, he said around 7 years. He further went on to say that only 40% of those groups succeeded, and that this team was below average.
This illustrates the two views: the inside view, in which the current success was optimistically extrapolated into the future, and the outside view, where the project was compared to broadly similar tasks in the past.
The book was eventually completed eight years later.
Scottish Parliament
In July 1997, the proposed new Scottish parliament building had an estimated cost of 40 million pounds.
In 2004, the project was finished and it had cost 431 million pounds.
Didn’t go quite to plan.
Kitchen Renovations
Kitchen renovators expected the job to cost 18,658 dollars on average. In reality, they ended up paying an average of 38,769 dollars. Poor predictions are still made despite all the evidence.
The Inside View
The inside view is where you focus on your specific circumstances, consider the obstacles you have, construct scenarios of future progress and extrapolate current trends. This is biased in predictable ways.
Things such as optimism bias mean you don’t give enough weight to obstacles, and WYSIATI means that you don’t consider “unknown unknowns”. You also don’t account for the accumulating chance of something going wrong as time progresses.
There are many ways for a plan to fail, and although most of them are too improbable to be anticipated, the likelihood that something will go wrong in a big project is high.
The Outside View
The outside view (or reference class forecasting) is a different approach. It essentially ignores the case at hand and involves no careful prediction of the success of the plan based on the current circumstances. Instead, it focuses on the statistics of “reference classes”: broadly similar projects that have occurred in the past.
In the concluding remarks of [[Thinking, Fast and Slow - Heuristics and Biases]]N, a framework is specified for how to adjust intuitive reasoning to better reflect reality. It went something like this:
- Make your intuitive guess
- Consider the baseline
- Consider the correlation between the factors
- Make your guess by moving from the baseline towards the intuitive guess, weighted by the correlation
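In symbols, with $\rho$ standing for the estimated correlation, that adjustment is:

$$\text{estimate} = \text{baseline} + \rho \times (\text{intuitive guess} - \text{baseline})$$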
The outside view is doing the same thing for planning. The “base rate” in this case is the success of similar projects in the past. More specifically, it looks something like:
- Identify an appropriate reference class, such as kitchen renovations, large railway projects, etc.
- Obtain the statistics of the reference class and use the statistics to generate a baseline prediction
- Use the specific information about the case to adjust the baseline prediction
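A minimal sketch of those three steps (the reference-class numbers and the predictive correlation are invented for illustration):

```python
import statistics

# Steps 1-2: pick a reference class and turn its statistics into a baseline.
# Durations (months) of broadly similar past kitchen renovations (invented data).
reference_class_months = [4, 6, 5, 9, 7, 12, 6, 8]
baseline = statistics.mean(reference_class_months)  # the outside view

# Step 3: adjust the baseline using case-specific information, weighted by how
# much that information actually predicts the outcome.
inside_view_estimate = 3         # the optimistic, case-specific plan
predictive_correlation = 0.3     # assumed validity of the case-specific info

adjusted = baseline + predictive_correlation * (inside_view_estimate - baseline)
print(f"baseline {baseline:.1f} months, adjusted forecast {adjusted:.1f} months")
```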
You can get better at planning without statistics, though, if you explicitly allow for unknown unknowns and work that into your plan.
Taking the Outside View is Difficult
An illusion of confidence further makes it hard to accept the outside view. The intuitive plan, based not on data but on heuristics, comes with a feeling of cognitive ease.
Considering the fact that it might take a lot longer than you anticipate is not fun, so you may subconsciously ignore it (Ugh fields again?).
It’s not entirely down to emotional avoidance, though. Information about the specific case is given a lot more weight (the representativeness heuristic), and things like regression to the mean are ignored.
The outside view is unnatural. It’s much easier to ignore the planning fallacy than to accept the bad news that your plan likely will not work.
Malicious Errors in Planning
Not all budget underestimation is due to the planning fallacy. It’s a well recognised fact that contractors routinely make much of their money from additions to the original plan.
Furthermore, some people actively underestimate budgets in order to get them approved. No-one wants to approve an 8-year-long project that only has a 40% chance of success. The planning fallacy here lies with the decision makers who have to approve plans.
Overconfidence and Capitalism
The actual title of this chapter is “The Engine of Capitalism”.
This chapter gives some interpretations of capitalistic behaviour in the context of overconfidence and optimism. It explains the reasons for biases like the planning fallacy in the context of a more general optimism bias.
Optimism
Optimism is a double-edged sword.
On one hand, optimistic people are happier and more popular thanks to their general positive outlook. They are resilient in adapting to failures and hardships, and persistent in the face of difficult obstacles.
Furthermore, optimism is also better for mental health in some cases. If you make excuses like “She was an awful woman” instead of “I’m an inept salesman” when a woman slams a door in your face, you can protect your sense of self. (Though see Have no excuses). You’re more likely to be a successful scientist if you believe that what you’re doing has purpose.
On the other hand, optimistic people are more likely to take excessive risks, which sets them up for greater losses. An unlucky entrepreneur might end up doubling their losses by pursuing their dream. Indeed, in a study of a company which evaluated inventions, 47% of individuals didn’t back down after their inventions were rated as a poor fit for the market, which made them lose significantly more money.
Is this a bad thing? People get to where they are by seeking challenges and taking risks. They are talented and lucky. If no-one pursued risky goals, the world wouldn’t look like it does today.
Optimism bias plays a significant role in whether individuals or institutions voluntarily take on significant risks. Sometimes the optimism is misplaced – based on an illusion of understanding or validity. These types of biases can be dangerous, leading to a sort of entrepreneurial delusion.
Taking an outside view here can help.
Optimistic risk taking definitely contributes to the “economic dynamism” of a capitalist society, even if most risk takers end up disappointed.
Entrepreneurial Optimism in Terms of Cognitive Biases
- We focus on our goal, anchor on our plan and neglect relevant base rates, exposing ourselves to the planning fallacy.
- We focus on what we want to do and can do, neglecting the plans and skills of others.
- Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck. We are prone to an illusion of control.
- We focus on what we know and neglect what we do not know (WYSIATI) which makes us overly confident in our beliefs.
Social Pressures for Optimism and Overconfidence
CFOs couldn’t predict the returns from the S&P 500 index as it’s a low-validity environment. They failed to recognise this and even put very tight confidence intervals on their beliefs. This is overconfidence.
CFOs have to be overconfident due to social pressures. If a CFO said that the S&P 500 could go between -10% and 30% returns, they wouldn’t be thought of as a very good CFO.
As another example, clinicians who say they are only 50% certain instead of completely certain aren’t considered very good, even if the 50% certainty is more accurate. It’s considered a weakness and a sign of vulnerability for clinicians to appear unsure.
Furthermore, the media favours experts who have strongly held beliefs.
All the social and economic pressures favour overconfidence.
Reasoning Under Uncertainty and Combating Optimism
Inadequate appreciation of the uncertainty of the environment leads people to take risks they should avoid. Appreciation of uncertainty is a cornerstone of rationality, but it’s not what organisations want.
Gary Klein, the opponent of the heuristics-and-biases approach who studied the efficacy of expert intuition with the author, suggests a technique to combat optimism in organisations:
When the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”
This is called a premortem. It is effective for several reasons:
- It overcomes groupthink; alternative suggestions are allowed to be made.
- It combats WYSIATI; you consider what you can’t possibly know.
- It legitimises doubts; typically doubt in an organisation’s plan is treated as flawed loyalty to the team and its leaders.