In 1975, American psychologist Baruch Fischhoff published “Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty.”2 Fischhoff developed a method to examine hindsight bias, a cognitive bias where an event’s outcome seems more predictable after we know what happened.
Fischhoff’s method consisted of presenting participants with four possible outcomes to a short story.2 Some participants were told which one of the four outcomes was true; other participants were not given any information. Then, all participants were asked to determine the likelihood of each outcome. Fischhoff found that, when a participant was told an outcome was true, they frequently assigned a higher probability to that outcome. On top of overestimating the probability of outcomes for which they had extra information, participants also failed to reconstruct their prior, less knowledgeable states of mind.
Stemming from Fischhoff’s work on hindsight bias, the term “curse of knowledge” was first used in the 1989 article “The curse of knowledge in economic settings: An experimental analysis” by economists Colin Camerer, George Loewenstein, and Martin Weber.3 They credited British-American psychologist Robin Hogarth with coining the term, and they explored the curse of knowledge in the context of economic transactions.
Their study observed that different economic agents have different amounts of knowledge.3 Sellers tend to know more about the value of their products than prospective buyers; workers tend to know more about their skills than prospective employers. Crucially, Camerer, Loewenstein, and Weber argued that the curse of knowledge perpetuates this informational imbalance even when an agent wants to convey what they know. They also argued that this unintentional imbalance had two consequences:
- Better-informed agents might suffer losses: having more information can actually hurt us!
- The curse of knowledge can mitigate market consequences that result from information asymmetry. A fruit seller who knows about defects a buyer cannot observe may unconsciously price as if the buyer could see them too, lowering prices to reflect those hidden flaws.
Following Camerer, Loewenstein, and Weber’s work, Elizabeth Newton, a graduate student in psychology at Stanford in 1990, developed an experiment that is now a classic example of the curse of knowledge.1 4 She asked one group of participants (the “tappers”) to tap out popular songs with their fingers while another group (the “listeners”) tried to recognize the tapped melodies. She also asked the tappers to predict how many listeners would guess each melody correctly.
In a sample of 120 melodies, listeners got it right only 2.5% of the time.1 The tappers had predicted a 50% success rate: they grossly overestimated how well the listeners could guess, due to the curse of knowledge. Once they were given a song to tap, they couldn’t help but hear the melody that their tapping was based on, so they assumed the listeners would also hear the melody.4 In reality, all the listeners heard was a random series of taps.1
The curse of knowledge was popularized in the 2007 book Made to Stick: Why Some Ideas Survive and Others Die.10 In it, brothers Chip and Dan Heath explore the concept of “stickiness”: making ideas memorable and interesting. They claim that by making ideas sticky, we can avoid the curse of knowledge: a sticky idea is so memorable that we never lose our original grasp of it. If our memory of a given idea stays vivid in its original form, we are less prone to reinterpreting and reevaluating it in accordance with what we know now.