Evolution Of Decision Making (2/3): Irrationality’s Revenge

by Justin Fox

This article series tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.


Missed the first part of this series? Go back to part 1 (The Rational Revolution).

Irrationality’s Revenge

Almost as soon as von Neumann and Morgenstern outlined their theory of expected utility, economists began adopting it not just as a model of rational behavior but as a description of how people actually make decisions. “Economic man” was supposed to be a rational creature; since rationality now included assessing probabilities in a consistent way, economic man could be expected to do that, too. For those who found this a bit unrealistic, Savage and the economist Milton Friedman wrote in 1948, the proper analogy was to an expert billiards player who didn’t know the mathematical formulas governing how one ball would carom off another but “made his shots as if he knew the formulas.”

Somewhat amazingly, that’s where economists left things for more than 30 years. It wasn’t that they thought everybody made perfect probability calculations; they simply believed that in free markets, rational behavior would usually prevail.

The question of whether people actually make decisions in the ways outlined by von Neumann and Savage was thus left to the psychologists. Ward Edwards was the pioneer, learning about expected utility and Bayesian methods from his Harvard statistics professor and writing a seminal 1954 article titled “The Theory of Decision Making” for a psychology journal. This interest was not immediately embraced by his colleagues—Edwards was dismissed from his first job, at Johns Hopkins, for focusing too much on decision research. But after a stint at an Air Force personnel research center, he landed at the University of Michigan, a burgeoning center of mathematical psychology. Before long he lured Jimmie Savage to Ann Arbor and began designing experiments to measure how well people’s probability judgments followed Savage’s axioms.

A typical Edwards experiment went like this: Subjects were shown two bags of poker chips—one containing 700 red chips and 300 blue chips, and the other the opposite. Each subject drew a few chips out of one of the bags, chosen at random, and then estimated the likelihood of having drawn from the mostly blue bag or the mostly red one.

Say you got eight red chips and four blue ones. What’s the likelihood that you had the predominantly red bag? Most people gave an answer between 70% and 80%. According to Bayes’ Theorem, the likelihood is actually 97%. Still, the changes in subjects’ probability assessments were “orderly” and in the correct direction, so Edwards concluded in 1968 that people were “conservative information processors”—not perfectly rational according to the rules of decision analysis, but close enough for most purposes.
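
For readers who want to check that 97% figure, here is a minimal sketch of the Bayes’-Theorem arithmetic, assuming equal prior odds for the two bags and treating the draws as independent (the standard textbook reading of the setup):

```python
# Bayes' Theorem applied to the poker-chip experiment described above.
# Assumptions: each bag is equally likely at the outset, and draws are
# treated as independent (with replacement), as in the textbook version.

def posterior_red_bag(red_drawn, blue_drawn, p_red_in_red_bag=0.7):
    p_blue_in_red_bag = 1 - p_red_in_red_bag
    # Likelihood of the observed sample under each hypothesis
    likelihood_red_bag = (p_red_in_red_bag ** red_drawn) * (p_blue_in_red_bag ** blue_drawn)
    likelihood_blue_bag = (p_blue_in_red_bag ** red_drawn) * (p_red_in_red_bag ** blue_drawn)
    # Posterior probability that the chips came from the mostly-red bag
    return likelihood_red_bag / (likelihood_red_bag + likelihood_blue_bag)

print(posterior_red_bag(8, 4))  # ~0.967, or roughly 97%
```

The intuition: every red chip in excess of the blue count multiplies the odds in favor of the red bag by 7 to 3, so a surplus of four chips already produces odds of about 30 to 1.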

In 1969 Daniel Kahneman, of the Hebrew University of Jerusalem, invited a colleague who had studied with Edwards at the University of Michigan, Amos Tversky, to address his graduate seminar on the practical applications of psychological research. Tversky told the class about Edwards’s experiments and conclusions. Kahneman, who had not previously focused on decision research, thought Edwards was far too generous in his assessment of people’s information-processing skills, and before long he persuaded Tversky to undertake a joint research project. Starting with a quiz administered to their fellow mathematical psychologists at a conference, the pair conducted experiment after experiment showing that people assessed probabilities and made decisions in ways systematically different from what the decision analysts advised.

“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction,” they wrote in 1973. “They rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors.”

Heuristics are rules of thumb—decision-making shortcuts. Kahneman and Tversky didn’t think relying on them was always a bad idea, but they focused their work on heuristics that led people astray. Over the years they and their adherents assembled a long list of these decision-making flaws—the availability heuristic, the endowment effect, and so on.

As an academic movement, this was brilliantly successful. Kahneman and Tversky not only attracted a legion of followers in psychology but also inspired a young economist, Richard Thaler, and with help from him and others came to have a bigger impact on the field than any outsider since von Neumann. Kahneman won an economics Nobel in 2002—Tversky had died in 1996 and thus couldn’t share the prize—and the heuristics-and-biases insights relating to money became known as behavioral economics. The search for ways in which humans violate the rules of rationality remains a rich vein of research for scholars in multiple fields.

The implications for how to make better decisions, though, are less clear. First-generation decision analysts such as Howard Raiffa and Ward Edwards recognized the flaws described by Kahneman and Tversky as real but thought the focus on them was misplaced and led to a fatalistic view of man as a “cognitive cripple.” Even some heuristics-and-biases researchers agreed. “The bias story is so captivating that it overwhelmed the heuristics story,” says Baruch Fischhoff, a former research assistant of Kahneman and Tversky who has long taught at Carnegie Mellon University. “I often cringe when my work with Amos is credited with demonstrating that human choices are irrational,” Kahneman himself wrote in Thinking, Fast and Slow. “In fact our research only showed that humans are not well described by the rational-agent model.” And so a new set of decision scholars began to examine whether those shortcuts our brains take are actually all that irrational.

When Heuristics Work

That notion wasn’t entirely new. Herbert Simon, originally a political scientist but later a sort of social scientist of all trades (the economists gave him a Nobel in 1978), had begun using the term “heuristic” in a positive sense in the 1950s. Decision makers seldom had the time or mental processing power to follow the optimization process outlined by the decision analysts, he argued, so they “satisficed” by taking shortcuts and going with the first satisfactory course of action rather than continuing to search for the best.
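
The distinction is easy to see in miniature. Here is an illustrative sketch of the two search strategies; the “good enough” threshold is hypothetical, since Simon left it to the decision maker to define what counts as satisfactory:

```python
# Illustrative contrast between optimizing and satisficing.
# The scoring function and threshold are hypothetical stand-ins.

def optimize(options, score):
    # Examine every option and return the best one
    return max(options, key=score)

def satisfice(options, score, good_enough):
    # Return the first option that clears the bar, stopping the search there
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing satisfactory found; keep searching or lower the bar
```

Optimizing demands that every option be evaluated; satisficing stops as soon as one clears the bar, trading some quality of outcome for a large saving in time and effort.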

Simon’s “bounded rationality,” as he called it, is often depicted as a precursor to the work of Kahneman and Tversky, but it was different in intent. Whereas they showed how people departed from the rational model for making decisions, Simon disputed that the “rational” model was actually best. In the 1980s others began to join in the argument.

The most argumentative among them was and still is Gerd Gigerenzer, a German psychology professor who also did doctoral studies in statistics. In the early 1980s he spent a life-changing year at the Center for Interdisciplinary Research in the German city of Bielefeld, studying the rise of probability theory in the 17th through 19th centuries with a group of philosophers and historians. One result was a well-regarded history, The Empire of Chance, by Gigerenzer and five others (Gigerenzer’s name was listed first because, in keeping with the book’s theme, the authors drew lots). Another was a growing conviction in Gigerenzer’s mind that the Bayesian approach to probability favored by the decision analysts was, although not incorrect, just one of several options.

When Gigerenzer began reading Kahneman and Tversky, he says now, he did so “with a different eye than most readers.” He was, first, dubious of some of the results. By tweaking the framing of a question, it is sometimes possible to make apparent cognitive illusions go away. Gigerenzer and several coauthors found, for example, that doctors and patients are far more likely to assess disease risks correctly when statistics are presented as natural frequencies (10 out of every 1,000) rather than as percentages.
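
A hypothetical example shows how much the framing matters (the numbers below are made up for illustration; they are not from Gigerenzer’s studies). Suppose a disease affects 10 out of every 1,000 patients, a screening test catches 8 of those 10, and it also falsely flags about 99 of the 990 healthy patients:

```python
# Illustrative only: hypothetical screening-test numbers used to contrast
# percentage framing with natural-frequency framing.

population = 1000    # imagine 1,000 patients
sick = 10            # 10 out of every 1,000 have the disease (1%)
true_positives = 8   # the test catches 8 of the 10 (80% sensitivity)
false_positives = 99 # it falsely flags 99 of the 990 healthy (10%)

# Natural-frequency framing: of the 107 people who test positive,
# only 8 actually have the disease.
positives = true_positives + false_positives
print(true_positives / positives)  # ~0.075, or about 7.5%
```

Stated as counts, the answer is nearly self-evident; stated as percentages (1% prevalence, 80% sensitivity, 10% false-positive rate), the same question trips up a large share of doctors and patients alike.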

But Gigerenzer wasn’t content to leave it at that. During an academic year at Stanford’s Center for Advanced Study in the Behavioral Sciences, in 1989–1990, he gave talks at Stanford (which had become Tversky’s academic home) and UC Berkeley (where Kahneman then taught) fiercely criticizing the heuristics-and-biases research program. His complaint was that the work of Kahneman, Tversky, and their followers documented violations of a model, Bayesian decision analysis, that was itself flawed or at best incomplete. Kahneman encouraged the debate at first, Gigerenzer says, but eventually tired of his challenger’s combative approach. The discussion was later committed to print in a series of journal articles, and after reading through the whole exchange, it’s hard not to share Kahneman’s fatigue.

Gigerenzer is not alone, though, in arguing that we shouldn’t be too quick to dismiss the heuristics, gut feelings, snap judgments, and other methods humans use to make decisions as necessarily inferior to the probability-based verdicts of the decision analysts. Even Kahneman shares this belief to some extent. He sought out a more congenial discussion partner in the psychologist and decision consultant Gary Klein. One of the stars of Malcolm Gladwell’s book Blink, Klein studies how people—firefighters, soldiers, pilots—develop expertise, and he generally sees the process as being a lot more naturalistic and impressionistic than the models of the decision analysts. He and Kahneman have together studied when going with the gut works and concluded that, in Klein’s words, “reliable intuitions need predictable situations with opportunities for learning.”

Are those really the only situations in which heuristics trump decision analysis? Gigerenzer says no, and the experience of the past few years (the global financial crisis, mainly) seems to back him up. When there’s lots of uncertainty, he argues, “you have to simplify in order to be robust. You can’t optimize any more.” In other words, when the probabilities you feed into a decision-making model are unreliable, you might be better off following a rule of thumb. One of Gigerenzer’s favorite examples of this comes from Harry Markowitz, the creator of the decision analysis cousin known as modern portfolio theory, who once let slip that in choosing the funds for his retirement account, he had simply split the money evenly among the options on offer (his allocation for each was 1/N). Subsequent research has shown that this so-called 1/N heuristic isn’t a bad approach at all.
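
The rule itself could hardly be simpler. A minimal sketch, with hypothetical fund names and amounts:

```python
# A minimal sketch of the 1/N heuristic: split the money evenly across
# whatever options are on offer. Fund names and the amount are hypothetical.

def one_over_n_allocation(contribution, funds):
    weight = 1 / len(funds)  # the "1/N" that gives the heuristic its name
    return {fund: contribution * weight for fund in funds}

funds = ["stock_index", "bond_index", "international", "money_market"]
print(one_over_n_allocation(10_000, funds))
# {'stock_index': 2500.0, 'bond_index': 2500.0, 'international': 2500.0, 'money_market': 2500.0}
```

Unlike a mean-variance optimizer, the rule needs no estimates of expected returns or covariances, which is exactly why it holds up when those estimates are unreliable.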


Liked this? Go to part 3 (Current State).


This article originally appeared at https://hbr.org/2015/05/from-economic-man-to-behavioral-economics and belongs to its creators.