Outcome bias, explained

Why do we judge decisions by results alone?

What is Outcome Bias?

Outcome bias is the tendency to evaluate a decision based on how it turned out rather than on the quality of the decision process, given the limited information available at the time.2 Building on early work on hindsight bias, in which knowing the result makes it seem obvious in retrospect,1 outcome bias concerns how we judge the choice itself. When a bet pays off, we call it smart; when it fails, we call it foolish, even if the reasoning and evidence were identical. In reality, many outcomes are driven by luck, noise, and hidden variables. If we always equate bad outcomes with bad decisions, we punish prudent risk-taking, reward reckless gambles that happen to work, and make it harder for teams to learn from near misses.
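The role of luck can be made concrete with a toy simulation (all probabilities and payoffs below are invented for illustration): a "prudent" bet with positive expected value is compared against a "reckless" bet with negative expected value, and we count how often a single realized outcome makes the reckless bet look like the better decision.

```python
import random

def outcome_misjudgment_rate(trials=10_000, seed=42):
    """Estimate how often judging by a single outcome ranks a
    negative-EV gamble above a positive-EV decision."""
    rng = random.Random(seed)

    # Each bet: (win probability, win payoff, loss payoff) -- invented numbers.
    prudent  = (0.70, 100, -50)   # EV = 0.70*100 + 0.30*(-50) = +55
    reckless = (0.15, 300, -80)   # EV = 0.15*300 + 0.85*(-80) = -23

    def draw(bet):
        p, win, loss = bet
        return win if rng.random() < p else loss

    misjudged = sum(draw(reckless) > draw(prudent) for _ in range(trials))
    return misjudged / trials

rate = outcome_misjudgment_rate()
print(f"Reckless bet 'wins' the comparison in {rate:.0%} of single trials")
```

Even though the reckless bet loses money on average, it yields the better single outcome roughly 15% of the time (whenever its big win lands), and those are exactly the cases in which outcome-based evaluation rewards the worse decision process.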

Where this bias occurs

Imagine a product lead who must decide whether to launch a feature before the holiday season. The data are incomplete, but there is a thoughtful risk assessment, clear hypotheses, and a rollout plan. The team launches. If the feature lifts revenue, colleagues describe the choice as bold and strategic. If it triggers instability and churn, the same analysis is labeled careless. Most postmortems are written as if the outcome were the only true measure of wisdom.

Outcome bias shows up anywhere decisions are evaluated after the fact, which covers a large share of modern work:

  • Clinical and safety decisions. In healthcare, aviation, and other safety-critical domains, teams are judged on whether harm occurred and whether their choices matched good practice under uncertainty.3
  • Legal and regulatory contexts. Courts and oversight bodies often decide whether someone was negligent after they know how much damage a decision caused.
  • Leadership and performance reviews. Managers rate people based on whether their bets delivered visible wins. Bets that did not pay off can overshadow strong reasoning and thorough preparation.
  • Investments and strategy. Boards and investors praise leaders whose risky moves pay off and criticize similar moves that fail, even when the underlying odds were the same.

These environments share a structural feature: the people who evaluate the decision know how things turned out, while the person who made the decision did not. Separating those two vantage points is difficult, which is why outcome bias is so persistent.

Sources

  1. Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288–299. https://doi.org/10.1037/0096-1523.1.3.288
  2. Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54(4), 569–579. https://doi.org/10.1037/0022-3514.54.4.569
  3. Berlin, L. (2007). Radiologic errors and malpractice: A blurry distinction. AJR. American Journal of Roentgenology, 189(3), 517–522. https://doi.org/10.2214/AJR.07.2209
  4. Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411–426. https://doi.org/10.1177/1745691612454303
  5. Murata, A., Nakamura, T., & Karwowski, W. (2015). Influence of cognitive biases in distorting decision making and leading to critical unfavorable incidents. Safety, 1(1), 44–52. https://doi.org/10.3390/safety1010044
  6. Sezer, O., Gino, F., & Bazerman, M. H. (2015). Ethical blind spots: Explaining unintentional unethical behavior. Current Opinion in Psychology, 6, 77–81. https://doi.org/10.1016/j.copsyc.2015.03.030
  7. Oeberst, A., & Goeckenjan, I. (2016). When being wise after the event results in injustice: Evidence for hindsight bias in judges’ negligence assessments. Psychology, Public Policy, and Law, 22(3), 271–279. https://doi.org/10.1037/law0000091
  8. Aiyer, S., Kam, H. C., Ng, K. Y., Young, N. A., Shi, J., & Feldman, G. (2023). Outcomes affect evaluations of decision quality: Replication and extensions of Baron and Hershey’s (1988) outcome bias Experiment 1. International Review of Social Psychology, 36(1), Article 12. https://doi.org/10.5334/irsp.751
  9. Schemmer, M., Kühl, N., Benz, C., Bartos, A., & Satzger, G. (2023). Appropriate reliance on AI advice: Conceptualization and the effect of explanations. In Proceedings of the 28th International Conference on Intelligent User Interfaces (IUI ’23). ACM. https://doi.org/10.48550/arXiv.2302.02187

About the Author


Adam Boros

Adam studied at the University of Toronto, Faculty of Medicine for his MSc and PhD in Developmental Physiology, complemented by an Honours BSc specializing in Biomedical Research from Queen's University. His extensive clinical and research background in women’s health at Mount Sinai Hospital includes significant contributions to initiatives to improve patient comfort, mental health outcomes, and cognitive care. His work has focused on understanding physiological responses and developing practical, patient-centered approaches to enhance well-being. When Adam isn’t working, you can find him playing jazz piano or cooking something adventurous in the kitchen.
