This article series tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.
Irrationality’s Revenge
Almost as soon as von Neumann and Morgenstern outlined their theory of expected utility, economists began adopting it not just as a model of rational behavior but as a description of how people actually make decisions. “Economic man” was supposed to be a rational creature; since rationality now included assessing probabilities in a consistent way, economic man could be expected to do that, too. For those who found this a bit unrealistic, Savage and the economist Milton Friedman wrote in 1948, the proper analogy was to an expert billiards player who didn’t know the mathematical formulas governing how one ball would carom off another but “made his shots as if he knew the formulas.”
Somewhat amazingly, that’s where economists left things for more than 30 years. It wasn’t that they thought everybody made perfect probability calculations; they simply believed that in free markets, rational behavior would usually prevail.
The question of whether people actually make decisions in the ways outlined by von Neumann and Savage was thus left to the psychologists. Ward Edwards was the pioneer, learning about expected utility and Bayesian methods from his Harvard statistics professor and writing a seminal 1954 article titled “The Theory of Decision Making” for a psychology journal. This interest was not immediately embraced by his colleagues—Edwards was dismissed from his first job, at Johns Hopkins, for focusing too much on decision research. But after a stint at an Air Force personnel research center, he landed at the University of Michigan, a burgeoning center of mathematical psychology. Before long he lured Jimmie Savage to Ann Arbor and began designing experiments to measure how well people’s probability judgments followed Savage’s axioms.
A typical Edwards experiment went like this: Subjects were shown two bags of poker chips, one containing 700 red chips and 300 blue chips, and the other 700 blue and 300 red. Subjects drew a handful of chips from one of the bags, chosen at random, and then estimated the likelihood that they had the mostly blue bag or the mostly red one.
Say you got eight red chips and four blue ones. What’s the likelihood that you had the predominantly red bag? Most people gave an answer between 70% and 80%. According to Bayes’ Theorem, the likelihood is actually 97%. Still, the changes in subjects’ probability assessments were “orderly” and in the correct direction, so Edwards concluded in 1968 that people were “conservative information processors”—not perfectly rational according to the rules of decision analysis, but close enough for most purposes.
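For readers who want to check the 97% figure, the intuition is that each red chip in the sample multiplies the odds in favor of the mostly red bag by 7 to 3, and each blue chip divides them by the same ratio; a surplus of four red chips therefore pushes the odds to (7/3)^4, or roughly 30 to 1. Here is a minimal sketch of the calculation in Python, assuming chips are drawn with replacement and both bags are equally likely at the outset; the function name and defaults are illustrative, not taken from Edwards's papers.

```python
def posterior_red_bag(n_red, n_blue, p=0.7, prior=0.5):
    """Posterior probability that a sample of chips came from the
    mostly red bag (700 red / 300 blue), assuming draws with
    replacement and a 50/50 prior over the two bags."""
    # Likelihood of the observed sample under each hypothesis
    like_red_bag = p**n_red * (1 - p)**n_blue    # bag is 70% red
    like_blue_bag = (1 - p)**n_red * p**n_blue   # bag is 30% red
    # Bayes' theorem: posterior is proportional to prior times likelihood
    numerator = prior * like_red_bag
    return numerator / (numerator + (1 - prior) * like_blue_bag)

print(round(posterior_red_bag(8, 4), 3))  # 0.967, i.e., about 97%
```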
In 1969 Daniel Kahneman, of the Hebrew University of Jerusalem, invited Amos Tversky, a colleague who had studied with Edwards at the University of Michigan, to address his graduate seminar on the practical applications of psychological research. Tversky told the class about Edwards's experiments and conclusions. Kahneman, who had not previously focused on decision research, thought Edwards was far too generous in his assessment of people's information-processing skills, and before long he persuaded Tversky to undertake a joint research project. Starting with a quiz administered to their fellow mathematical psychologists at a conference, the pair conducted experiment after experiment showing that people assessed probabilities and made decisions in ways systematically different from what the decision analysts advised.