Why are we overconfident in our predictions?

Illusion of Validity


What is the Illusion of Validity?

The illusion of validity is a cognitive bias that describes our tendency to be overconfident in the accuracy of our judgments, specifically in our interpretations and predictions regarding a given data set.

Where this bias occurs

Teachers often believe that they can accurately predict how well a student will do in their course based on the student’s past performance at the school. While a teacher might be very confident in their predictions about a certain student, there may be things going on behind the scenes that lead to drastically different outcomes. For example, if a student with a pristine record begins to develop symptoms of a mental illness, such as depression, or suddenly finds themselves in the midst of their parents’ messy divorce, they may no longer earn straight As, and may even begin to act out. On the other hand, a student who typically gets lower grades may realize that the college program they want to get into is quite competitive, and decide to pull their act together and double down on their studying. In both cases, a teacher might have predicted the student’s performance based on a past pattern that ultimately did not hold.

Individual effects

In order to cope with the unpredictability of the world in which we live, we construct narratives that provide a coherent explanation for random occurrences.1 We fill in the gaps as needed, inferring causes and consequences from the information we are given. The less information we have, the easier it is to put together a satisfying story, which can lead us to believe that we know more than we actually do. Somewhere along the way, we start to accept the inferences we made as factual. Our confidence in our ability to know the unknowable can blind us to our own ignorance.

Our predictions often impact the decisions we make. When we feel particularly confident in a prediction, we may be more inclined to make important decisions based on it. Since our predictions often prove to be inaccurate, this can have unfortunate repercussions.

Systemic effects

Overconfidence is an undesirable trait because its consequences often have a wide scope. This is particularly true when it is found in people in positions of power, such as politicians, who are required to make important decisions that affect the lives of many others.

Why it happens

Different heuristics and cognitive biases are often intertwined. In this case, the representativeness heuristic, base rate fallacy, and confirmation bias have all been cited as possible causes of the illusion of validity.

Representativeness heuristic

When attempting to make a prediction in uncertain circumstances, we often rely on the representativeness heuristic. A heuristic is a mental shortcut used to facilitate decision-making. These rules of thumb are generalizations that often lead to stereotyped, or even inaccurate, conclusions. The representativeness heuristic specifically refers to how we view events and objects in relation to a category prototype, or representation. We assess the similarity between the object or event in question and the prototypical example; the more similar they are, the more confident we are in our ability to predict the likelihood of a certain outcome.

In their paper “On the Psychology of Prediction”,2 Daniel Kahneman and Amos Tversky placed significant focus on the role played by representativeness in prediction. In fact, they went so far as to state that “the thesis of this paper is that people predict by representativeness, that is, they select or order outcomes by the degree to which the outcomes represent the essential features of the evidence” (pp. 237-238).

The representativeness heuristic therefore underlies the illusion of validity. The more representative we feel a certain outcome is of the evidence we have been provided, the more confident we will be in our prediction that said outcome will occur. However, as Kahneman and Tversky point out, certain factors can affect the probability of that outcome occurring, without altering its representativeness.3 As such, we can end up confidently making predictions that turn out to be wildly incorrect.

Base rate fallacy

It’s clear that we’re not as good at making predictions as we think we are. The base rate fallacy is another reason why we frequently miss the mark. This fallacy describes how we tend to ignore base rate information in favor of individuating information. Base rate information refers to objective statistical information; for example, 20% of female adolescents and 6.8% of male adolescents experienced at least one major depressive episode in 2017.4 Individuating information is specific to a particular person.

Take Jane as a first example. She is a fifteen-year-old straight-A student, who is student council president, volleyball team captain, and is known for being incredibly cheerful and sociable. Another example of individuating information would be to describe a boy named John as a student struggling to get through high school, who tends to keep to himself, and whose parents are going through a divorce. When given both base rate information about adolescent depression and individuating information about these two people, we tend to ignore the former and base our predictions on the latter.

In this case, if asked to predict which of these two students is more likely to experience a major depressive episode, people might be more likely to respond with John than with Jane. They might be inclined to say that it’s very unlikely that Jane would experience a major depressive episode, since she is so accomplished and seems to have a very sunny disposition, while John seems to be struggling more and to be more reserved. However, this ignores the fact that adolescent females are significantly more likely to experience a major depressive episode in a given year than adolescent males are.
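
To see how much work the base rates do here, consider a minimal Bayesian sketch in Python. The base rates are the NIMH figures cited above; the likelihood ratios attached to Jane’s and John’s individuating information are hypothetical numbers chosen purely for illustration.

```python
# A minimal Bayesian sketch using the NIMH 2017 base rates. The
# likelihood ratios for the individuating information are hypothetical
# values chosen for illustration, not estimates from the literature.

def posterior(prior, likelihood_ratio):
    """Update a prior probability with a likelihood ratio (odds form of Bayes' rule)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Base rates: major depressive episode among adolescents in 2017.
jane_prior = 0.20    # female adolescents
john_prior = 0.068   # male adolescents

# Hypothetical likelihood ratios: Jane's cheerful, high-achieving profile
# is assumed half as likely given depression; John's withdrawn, stressed
# profile is assumed twice as likely.
print(f"Jane: {posterior(jane_prior, 0.5):.2f}")  # ~0.11
print(f"John: {posterior(john_prior, 2.0):.2f}")  # ~0.13
```

Even granting John’s profile twice the evidential weight and discounting Jane’s by half, the two posterior probabilities (roughly 13% and 11%) end up close together, far from the confident “John” answer that intuition suggests.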

The base rate fallacy gives rise to the illusion of validity because individuating information makes us confident in our predictions; yet because we tend to ignore base rate information, those predictions are often inaccurate.

Confirmation bias

Another factor that gives rise to the illusion of validity is confirmation bias. This cognitive bias describes how we selectively attend to evidence that supports our existing beliefs and ignore evidence that contradicts them. We do this partly because it is a mental shortcut that lets us allocate mental energy to other tasks, and partly because it boosts our self-esteem by confirming our stance on the topic at hand.

Confirmation bias can lead to the illusion of validity because cherry-picking specific information to support our prediction increases our confidence in it. When we are able to back up our beliefs with specific evidence, we feel more sure of them. In the case of confirmation bias, however, we ignore essential information that would otherwise demonstrate that we should not be so certain of our predictions.

Why it is important

It’s important that we understand the illusion of validity so that we can learn to rein in our confidence when making predictions. It’s perfectly fine to predict that something will occur, but acting as though it were a certainty can have serious implications. For example, the self-fulfilling prophecy refers to how we can inadvertently behave in a manner that brings about an outcome that matches our expectations.

If a teacher believes that a student in their class will perform poorly, based on past behavior, the way the teacher treats the student could lead them to get low grades, even if the student had every intention of improving. At other times, the illusion of validity simply leaves us with our expectations completely subverted. It’s better to remain open to the possibility that our predictions could be wrong and to consider that there may be variables contributing to the outcome that we know nothing about.

How to avoid it

Unfortunately, being aware of the illusion of validity is not sufficient to overcome it. Even someone who knows that their source of information is unreliable may feel extremely confident in their prediction of a certain outcome.5 Avoiding the illusion of validity requires significant critical thinking. We must first evaluate the evidence we have been given and ask ourselves whether there are other factors that could influence the outcome that we might be unaware of. It is also necessary to analyze the data effortfully, rather than simply gleaning meaning from whatever patterns stick out at first glance.

These steps are useful, but they still do not guarantee that our predictions will be entirely accurate. This makes it important to keep in mind that the expected outcome might not always be the actual outcome. Coming up with alternate predictions or developing contingency plans can help you prepare for different outcomes than the one you predicted and prevent you from being blindsided should your prediction prove incorrect. Remember, it’s always good to keep an open mind.

How it all started

Kahneman and Tversky introduced the concept of the illusion of validity in their 1973 paper “On the Psychology of Prediction”.6 They defined the illusion of validity as the tendency “to experience much confidence in highly fallible judgments” (p. 249). Through a series of studies, they demonstrated that consistent data and extreme scores generally increase our confidence in our predictions, even though they are frequently negatively correlated with predictive accuracy.

One of these studies asked participants to predict the grade point averages (GPAs) of students based on their scores on two pairs of aptitude tests. The first pair of tests assessed creative thinking and symbolic ability, and the scores were highly correlated with one another; for example, the grades on both tests were Bs. The second pair of tests assessed mental flexibility and systematic reasoning, and the scores were not correlated, such that the grade on one test was an A and the grade on the other was a C. Participants were then told that “all tests were found equally successful in predicting college performance.”

However, because consistency promotes confidence, participants assigned greater weight to the correlated scores in making their predictions. This reflects the representativeness heuristic: the more consistent the data, the more representative the predicted score seems. In this case, participants disregarded important information because they deemed it inconsistent and therefore unreliable, even though they had been assured of its predictive value. Thus, greater consistency increases confidence while decreasing validity, producing the bias these researchers termed the illusion of validity.7
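
Kahneman and Tversky’s observation that consistency undermines validity has a standard statistical counterpart: highly intercorrelated predictors carry largely redundant information. As a minimal sketch, the Python snippet below applies the textbook formula for the squared multiple correlation of two equally valid predictors; the validity value r = 0.5 is a hypothetical number chosen for illustration.

```python
# Sketch of a standard regression identity: the squared multiple
# correlation for two predictors that each correlate r with the
# criterion and r12 with each other. The validity r = 0.5 is a
# hypothetical value for illustration.

def multiple_r_squared(r, r12):
    # R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2); with r1 = r2 = r
    # this simplifies to 2*r^2 / (1 + r12).
    return 2 * r**2 / (1 + r12)

r = 0.5
print(multiple_r_squared(r, 0.0))  # uncorrelated tests:      R^2 = 0.50
print(multiple_r_squared(r, 0.8))  # highly consistent tests: R^2 ~ 0.28
```

The consistent pair of scores, which feels more trustworthy, actually explains far less of the variance in the outcome than the uncorrelated pair does.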

A second study of theirs supported their hypothesis that we also tend to have greater confidence when making extreme predictions. Participants were far more confident in predictions about extreme behavior than in predictions about mediocre behavior, despite the fact that extreme behavior is far less likely to occur.8
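
The extremity finding can be illustrated with a small simulation. Predicting by representativeness amounts to making the prediction as extreme as the evidence (predicting y = x in standardized scores), whereas the statistically optimal rule regresses the prediction toward the mean (y = r·x). The correlation r = 0.5 below is, again, a hypothetical value chosen for this sketch.

```python
import random

# Minimal simulation: evidence x and outcome y are standardized scores
# correlated at a hypothetical r = 0.5.
random.seed(0)
r = 0.5
pairs = []
for _ in range(100_000):
    x = random.gauss(0, 1)
    y = r * x + random.gauss(0, (1 - r**2) ** 0.5)
    pairs.append((x, y))

def mse(predict):
    """Mean squared error of a prediction rule over the simulated pairs."""
    return sum((predict(x) - y) ** 2 for x, y in pairs) / len(pairs)

# Representativeness: predict an outcome as extreme as the evidence.
print(f"nonregressive y = x:  MSE ~ {mse(lambda x: x):.2f}")      # ~1.00
# Optimal: regress the prediction toward the mean.
print(f"regressive y = r*x:   MSE ~ {mse(lambda x: r * x):.2f}")  # ~0.75
```

The nonregressive rule produces bolder predictions but a larger average error, which is exactly the combination of high confidence and low validity described above.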

Example 1 - Israeli army

Kahneman drew from his experience as a psychologist for the Israeli army in order to explain the illusion of validity in the foundational paper he penned alongside Tversky. He recounted this discovery in a 2011 New York Times Magazine essay, “Don’t Blink! The Hazards of Confidence.”9 During his time with the army, he was tasked with assessing candidates for officer training. One test used to do so was the leaderless group challenge, in which a group of soldiers was tasked with completing an obstacle course. Kahneman and his colleagues took note of the soldiers’ leadership qualities and observed how they worked as a team. From this, they predicted how the candidates would perform at officer-training school. Kahneman put it bluntly, stating that “our ability to predict performance at the school was negligible. Our predictions were better than blind guesses, but not by much.” Yet this knowledge did not stop them from repeating the same process, nor did it diminish their confidence in their predictions. Kahneman was fascinated by this realization, which led him to coin the term “illusion of validity”: their confidence in their ability to predict the soldiers’ future performance at the school was unaffected by the statistical evidence that their predictions were inaccurate.

Example 2 - Infant brain damage

The illusion of validity is also present in the medical field, as illustrated in a 2018 paper by F. Gilles, P. Gressens, O. Dammann, and A. Leviton titled “Hypoxia–ischemia is not an antecedent of most preterm brain damage: the illusion of validity”.10 Brain damage in premature infants has been termed hypoxic-ischemic encephalopathy, a label that is applied even when there is no evidence of hypoxia, an insufficient supply of oxygen. The authors concede that hypoxia can result in some brain abnormalities but posit that the majority of abnormalities result from multiple factors, making the name misleading. They believe that the heuristic approach used by doctors to evaluate probabilities and make predictions can lead to the illusion of validity: overconfidence in the accuracy of these assessments. For example, referring to preterm brain damage as hypoxic-ischemic encephalopathy can oversimplify the condition, which can limit future research and prevent the field from advancing its understanding of the condition. The name increases confidence that hypoxia is the underlying cause of the brain abnormalities, even though that is not necessarily the case. The authors suggest renaming the condition neonatal or newborn encephalopathy in order to accommodate its other possible causes.

Summary

What it is

The illusion of validity illustrates how the confidence we place in our judgments is often unwarranted. The interpretations and predictions we make when analyzing a data set are often less accurate than we believe them to be.

Why it happens

The illusion of validity can be chalked up to other biases and heuristics, such as confirmation bias, the representativeness heuristic, and base rate fallacy.

Example 1 – Israeli army

During his time as a psychologist for the Israeli army, Kahneman and his colleagues used a test to assess soldiers’ leadership abilities, which they believed would allow them to accurately predict the soldiers’ performance at officer-training school. Despite the fact that their predictions proved incorrect, they continued to use the test and to confidently make predictions based on it.

Example 2 – Causes of brain damage in preterm infants

Brain damage in premature infants is referred to as hypoxic-ischemic encephalopathy, even in the absence of evidence of hypoxia, which is a lack of oxygen. It has been suggested that this name leads to the illusion of validity, by bolstering medical professionals’ confidence in hypoxia as the cause, which could potentially limit the study of other possible causes.

How to avoid it

The illusion of validity is difficult to avoid, because even people who are aware of the bias and who know that the information they have been given isn’t sufficient to make an accurate prediction are often still affected by it. By analyzing the information available to you and considering what evidence you might be missing, in addition to considering possible alternate outcomes, you can start to overcome it.

Sources

  1. Penn, A. (2019). Illusion of Validity: Think You Make Good Predictions? Shortform. https://www.shortform.com/blog/illusion-of-validity/
  2. Kahneman, D., & Tversky, A. (1973). On the Psychology of Prediction. Psychological Review, 80(4), 237-251. doi: 10.1037/h0034747
  3. See 2
  4. “Major Depression”. The National Institute of Mental Health. https://www.nimh.nih.gov/health/statistics/major-depression.shtml
  5. See 2
  6. See 2
  7. See 2
  8. See 2
  9. Kahneman, D. (2011). “Don’t Blink! The Hazards of Confidence”. The New York Times. https://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html
  10. Gilles, F., Gressens, P., Dammann, O., & Leviton, A. (2018). Hypoxia-ischemia is not an antecedent of most preterm brain damage: the illusion of validity. Developmental Medicine & Child Neurology, 60(2), 120–125. doi: 10.1111/dmcn.13483

About the Authors


Dan Pilat

Dan is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. Dan has a background in organizational decision making, with a BComm in Decision & Information Systems from McGill University. He has worked on enterprise-level behavioral architecture at TD Securities and BMO Capital Markets, where he advised management on the implementation of systems processing billions of dollars per week. Driven by an appetite for the latest in technology, Dan created a course on business intelligence and lectured at McGill University, and has applied behavioral science to topics such as augmented and virtual reality.


Dr. Sekoul Krastev

Sekoul is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. A decision scientist with a PhD in Decision Neuroscience from McGill University, Sekoul's work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.
