Evolution of Decision Making (1/3): The Rational Revolution

by Justin Fox

When we make decisions, we make mistakes. We all know this from personal experience, of course. But just in case we didn’t, a seemingly unending stream of experimental evidence in recent years has documented the human penchant for error. This line of research—dubbed heuristics and biases, although you may be more familiar with its offshoot, behavioral economics—has become the dominant academic approach to understanding decisions. Its practitioners have had a major influence on business, government, and financial markets. Their books—Predictably Irrational; Thinking, Fast and Slow; and Nudge, to name three of the most important—have suffused popular culture.

So far, so good. This research has been enormously informative and valuable. Our world, and our understanding of decision making, would be much poorer without it.

Another way of thinking about decision making

It is not, however, the only useful way to think about making decisions. Even if you restrict your view to the academic discussion, there are three distinct schools of thought. Although heuristics and biases is currently dominant, for the past half century it has interacted with and sometimes battled with the other two, one of which has a formal name—decision analysis—and the other of which can perhaps best be characterized as demonstrating that we humans aren’t as dumb as we look.

Adherents of the three schools have engaged in fierce debates, and although things have settled down lately, major differences persist. This isn’t like David Lodge’s aphorism about academic politics being so vicious because the stakes are so small. Decision making is important, and decision scholars have had real influence.

This article series tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.

The Rational Revolution

During World War II statisticians and others who knew their way around probabilities (mathematicians, physicists, economists) played an unprecedented and crucial role in the Allied effort. They used analytical means—known as operational research in the UK and operations research on this side of the Atlantic—to improve quality control in manufacturing, route ships more safely across the ocean, figure out how many pieces antiaircraft shells should break into when they exploded, and crack the Germans’ codes.

After the war hopes were high that this logical, statistical approach would transform other fields. One famous product of this ambition was the nuclear doctrine of mutual assured destruction. Another was decision analysis, which in its simplest form amounts to (1) formulating a problem, (2) listing the possible courses of action, and (3) systematically assessing each option. Historical precedents existed—Benjamin Franklin had written in the 1770s of using a “Moral or Prudential Algebra” to compare options and make choices. But by the 1950s there was tremendous interest in developing a standard approach to weighing options in an uncertain future.

The mathematician John von Neumann, who coined the term mutual assured destruction, helped jump-start research into decision making with his notion of “expected utility.” As outlined in the first chapter of his landmark 1944 book Theory of Games and Economic Behavior, written with the economist Oskar Morgenstern, expected utility is what results from combining imagined events with probabilities. Multiply the likelihood of each outcome by the gains (or losses) that would accrue, sum across the outcomes, and you get a number, the expected utility, to guide your decisions.
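The calculation itself is simple enough to sketch in a few lines. The numbers below are invented for illustration (a made-up poker-style choice, not an example from von Neumann's book): each option is a list of (probability, payoff) pairs, and its expected utility is the probability-weighted sum of the payoffs.

```python
# Expected utility: weight each outcome's payoff by its probability and sum.

def expected_utility(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical poker-style choice: call a bet or fold.
call = [(0.3, 100), (0.7, -20)]   # 30% chance to win 100, 70% chance to lose 20
fold = [(1.0, 0)]                 # folding yields nothing for certain

print(expected_utility(call))     # 0.3 * 100 - 0.7 * 20 = 16
print(expected_utility(fold))     # 0
```

On these numbers the rule says to call, since 16 beats 0, even though calling loses more often than it wins.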

It’s seldom that simple, of course. Von Neumann built his analysis around the game of poker, in which potential gains are easily quantifiable. In lots of life decisions, it’s much harder. And then there are the probabilities: If you’re uncertain, how are you supposed to know what those are?

The winning answer was that there is no one right answer—everybody has to venture a guess—but there is one correct way to revise probabilities as new information comes in. That is what has become known as Bayesian statistics, a revival and advancement of long-dormant ideas (most of them the work not of the English reverend Thomas Bayes but of the French mathematical genius Pierre-Simon Laplace) by a succession of scholars starting in the 1930s. For the purposes of storytelling simplicity I’ll mention just one: Leonard Jimmie Savage, a statistics professor whose 1954 book The Foundations of Statistics laid out the rules for changing one’s probability beliefs in the face of new information.
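The revision rule Savage formalized is Bayes' rule: the posterior probability of a hypothesis is its prior probability reweighted by how likely the new evidence would be if the hypothesis were true. A minimal sketch, with illustrative numbers of my own choosing (not an example from Savage's book):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not H) * P(not H).

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) from a prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start 50/50 on a hypothesis, then observe evidence that is four times
# as likely if the hypothesis is true as if it is false.
belief = 0.5
belief = bayes_update(belief, 0.8, 0.2)   # rises to 0.8
belief = bayes_update(belief, 0.8, 0.2)   # a second observation raises it further
print(belief)
```

The point of the machinery is exactly what the paragraph above describes: your starting guess is yours to make, but once evidence arrives, there is only one coherent way to revise it.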

Products of the rational revolution

One early and still-influential product of this way of thinking is the theory of portfolio selection, outlined in 1952 by Savage’s University of Chicago student Harry Markowitz, which advised stock pickers to estimate both the expected return on a stock and the likelihood that their estimate was wrong. Markowitz won a Nobel Prize for this work in 1990.
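Markowitz's insight can be illustrated with a toy two-asset portfolio: judge each mix not only by its expected return but by its variance, the measure of how wrong the estimate might be. The return and covariance numbers below are invented for the example, not drawn from Markowitz's paper.

```python
# Mean-variance sketch for a two-asset portfolio with weights w = [w0, w1].

def portfolio_stats(w, returns, cov):
    """Expected return and variance of a two-asset portfolio."""
    exp_ret = w[0] * returns[0] + w[1] * returns[1]
    var = (w[0] ** 2 * cov[0][0] + w[1] ** 2 * cov[1][1]
           + 2 * w[0] * w[1] * cov[0][1])
    return exp_ret, var

# Hypothetical assets: one higher-return/higher-risk, one lower on both counts.
returns = [0.10, 0.04]
cov = [[0.04, 0.002],     # variances on the diagonal,
       [0.002, 0.01]]     # covariance off the diagonal
for w in (1.0, 0.5, 0.0):
    r, v = portfolio_stats([w, 1 - w], returns, cov)
    print(f"weight {w:.1f}: return {r:.3f}, variance {v:.4f}")
```

Because the assets are only weakly correlated, the 50/50 mix has a variance (0.0135) well below the average of the two assets' variances (0.025), which is the diversification effect the theory formalizes.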

The broader field of decision analysis began to come together in 1957, when the mathematician Howard Raiffa arrived at Harvard with a joint appointment in the Business School and the department of statistics. He soon found himself teaching a statistics course for business students with Robert Schlaifer, a classics scholar and fast learner who in the postwar years taught pretty much whatever needed teaching at HBS. The two concluded that the standard statistics fare of regressions and P values wasn’t all that useful to future business leaders, so they adopted a Bayesian approach. Before long what they were teaching was more decision making than statistics. Raiffa’s decision trees, with which students calculated the expected value of the different paths available to them, became a staple at HBS and the other business schools that emulated this approach.
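A decision tree in Raiffa's classroom sense can be sketched as nested nodes that are "rolled back" from the leaves: a chance node's value is the probability-weighted average of its branches, and a decision node's value is its best option. The structure and payoffs below are illustrative inventions, not a case from HBS.

```python
# Roll back a decision tree: chance nodes average their branches by
# probability; decision nodes take the branch with the highest value.

def rollback(node):
    """Return the expected value of a tree of nested dicts; leaves are payoffs."""
    if isinstance(node, (int, float)):
        return node
    if node["type"] == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    if node["type"] == "decision":
        return max(rollback(child) for child in node["options"])

# Hypothetical choice: launch a product now, or run a safer market test first?
tree = {"type": "decision", "options": [
    {"type": "chance", "branches": [(0.4, 500), (0.6, -200)]},  # launch now
    {"type": "chance", "branches": [(0.5, 300), (0.5, 0)]},     # test first
]}
print(rollback(tree))
```

Here launching now is worth 0.4 * 500 - 0.6 * 200 = 80, testing first is worth 150, so the rollback picks the test, which is exactly the expected-value comparison students performed on Raiffa's trees.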

The actual term “decision analysis,” though, was coined by Ronald Howard, an MIT electrical engineer and an expert in statistical processes who had studied with some of the leading figures in wartime operations research at MIT and crossed paths with Raiffa in Cambridge. While visiting Stanford for the 1964–1965 academic year, Howard was asked to apply the new decision-making theories to a nuclear power plant being contemplated at General Electric’s nuclear headquarters, then located in San Jose. He combined expected utility and Bayesian statistics with computer modeling and engineering techniques into what he dubbed decision analysis and some of his followers call West Coast decision analysis, to distinguish it from Raiffa’s approach. Howard and Raiffa were honored as the two founding fathers of the field at its 50th-anniversary celebration last year.


Liked this? Read part 2 (Irrationality’s Revenge).


This article originally appeared at https://hbr.org/2015/05/from-economic-man-to-behavioral-economics and belongs to its creators.