Thinking Fast: When Intuition Isn’t All Bad, According to its Biggest Critic

We rely on our gut for everything from choosing a dish at a restaurant to making high-stakes career choices. Behavioral science, however, has given intuition a bad rap. Over the years, the literature on cognitive biases and heuristics has catalogued the numerous ways our impulses lead us into frequent, predictable mistakes.

Behavioral science is all about unpacking human decisions and understanding what drives our choices beneath the surface. Since Daniel Kahneman’s early work on cognitive errors, we have celebrated deliberate choice as a panacea for our imperfect decision-making faculties. The classic directives are to stifle our intuition and assess a choice against objective outside information: bring in base rates, check your priors, consider underlying motivations, and only then decide.

However, we can’t always stop and think about our choices before they happen. Many of our biases save us time and energy, so that we can handle a world that is constantly in flux. 

The word “choice” evokes scenarios where one has time to consider several options, perform some sort of calculus, and then emerge with a preference. But what about the decisions we make that don’t really feel like choices at all? When we follow our instincts, it can feel like we’re being pulled toward a certain option, and the process of decision-making becomes one of traffic control. Should we follow this urge or rein it in?

Titans talking past each other

Kahneman, Thaler, Sunstein, and other big names in behavioral science have shown us ways that our intuition sends us headfirst into suboptimal scenarios. Other thinkers, however, such as Gary Klein, see things from a different perspective. Klein respects the experts who make snap judgments in difficult situations, while Kahneman is often characterized as skeptical of their expertise. This standoff has led to claims that behavioral science is stuck on the question of whether we can really trust ourselves. In 2009, however, Kahneman and Klein published an adversarial collaboration, “Conditions for Intuitive Expertise: A Failure to Disagree,” to work out where their disagreement actually comes from. The biggest finding? They don’t disagree all that much. The differences in opinion are mostly a matter of perspective, and the hard, underlying questions are resolved once we look at the environment in which our intuition is asked to work.

When these two titans of behavioral science sat down to work out their differences, they realized they were looking at the same problem from different directions. Klein’s focus is on expert decision-making: he wants to understand how certain individuals use intuition successfully. He seeks to identify which qualities allow experts to consistently make the right call in situations that would leave the rest of us paralyzed.

Kahneman, by contrast, cares about how the pros stack up against algorithms. If a human expert and a decision-making algorithm face the same type of problem over and over again, the algorithm tends to win out in the long run; after all, it is optimized to make the best decision possible across many repetitions. But whether statistical thinking beats expert judgment is a different problem from the one Klein is trying to answer.
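To see why a consistent statistical rule tends to beat gut feel over many repetitions, consider a minimal sketch. Everything here is invented for illustration (the toy outcome model, the cue weights, the caricatured “intuitive” judge who fixates on one salient cue); it is not drawn from Kahneman’s or Klein’s actual studies.

```python
import random

random.seed(0)

N_CUES = 5
N_TRIALS = 10_000

def make_case():
    """A low-validity environment: several weak, noisy cues plus luck."""
    cues = [random.gauss(0, 1) for _ in range(N_CUES)]
    # Each cue carries a little signal; much of the outcome is noise.
    outcome = sum(cues) * 0.3 + random.gauss(0, 1)
    return cues, outcome

def statistical_rule(cues):
    # The "boring" consistent model: weight every cue the same, every time.
    return sum(cues) * 0.3

def intuitive_judge(cues):
    # A caricature of unaided intuition: fixate on one salient cue
    # and amplify it, ignoring the rest.
    return cues[0] * 1.5

stat_err = intuit_err = 0.0
for _ in range(N_TRIALS):
    cues, outcome = make_case()
    stat_err += (statistical_rule(cues) - outcome) ** 2
    intuit_err += (intuitive_judge(cues) - outcome) ** 2

print(f"statistical rule MSE: {stat_err / N_TRIALS:.2f}")
print(f"intuitive judge MSE:  {intuit_err / N_TRIALS:.2f}")
```

Over thousands of trials the evenly weighted rule accumulates a much smaller error than the cue-fixated judge, which is the long-run advantage Kahneman has in mind; it says nothing about any single dramatic call, which is closer to Klein’s territory.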

Where intuition is valuable

As we’ve seen, it would be reductive to say that we shouldn’t use our intuition when making decisions. We have to do so, all the time. This still leaves us with the original problem of figuring out when we can trust our gut versus recognizing situations when we can’t go it alone. The truth is, intuitive thinking is necessary and often effective. The question of when and how to use our intuitions comes down to the nature of the decision-making environment. 

We must look out for two key criteria when assessing an expert’s ability to make gut choices: first, that their decision-making environment is one of high validity, and second, that they have adequate opportunities to practice their judgment. 

An environment of high validity is one where there are reliable cues that hint at the right answer. When we learn how to categorize objects or events, the repeated, consistent observation of certain features is essential to our ability to learn. Imagine, for instance, a young child learning what a “face” is. Every face has two eyes, a nose, and a mouth. In situations with high validity, reliable cues like these will be present, even if we’re not entirely aware of them on a conscious level.

Intuitive judgment doesn’t depend on consciously processing each cue. Rather, we use our procedural memory—the same cognitive system that helps us walk or ride a bike—to learn patterns in a given familiar situation. It may be possible in hindsight to understand how each muscle works in the act of walking, and the same goes for figuring out what prompted the right answer in intuitive judgments, but that isn’t what our body is doing when we make split-second decisions. In the moment, intuitive judgments are characterized by decisions that happen faster than one can rationalize. 

Second, the opportunity to practice one’s judgment comes from repeated cases in which to test one’s snap judgments. Even highly predictable scenarios are hard to recognize the first time we encounter them. Cutlery should always be in roughly the same spot, no matter whose kitchen you’re in, but somehow I always have to open three or four drawers before I get it right whenever I’m in a new space.

The difference between a firefighter (Klein’s favorite example of an expert decision-maker) and a stockbroker (one of the many “experts” Kahneman likes to skewer) is the validity of their environment. Both get to test out their intuition over and over again, but only the firefighter has reliable cues that can at least be distinguished in hindsight, such as temperature and the sound of the fire. The problem is different for the financial “experts,” as well as political and business leaders, who are faced with widely different situations each time they have to make an impulse decision. Since the stock market does not repeat itself predictably, the broker does not get the practice that is necessary to make reliable snap judgments. 


Breaking down the decision environment into these key criteria builds essential tools for applied behavioral scientists. By ensuring that decision-makers are equipped with the right context to make good choices, we can reliably improve outcomes for people according to their own goals. 

There are cases where we are clearly served well by our instincts. Athletes and emergency workers rely on fast-paced decision-making in order to do their jobs. Other times, our instincts give us unreliable advice. Behavioral science helps us understand when intuition can be improved by algorithms and other simple rule-based systems, and what our cognitive systems need in order to thrive. The field is at its best when we carefully pick apart the places where people need help to navigate complexity, and respect the brilliance of the human mind when it is doing just fine all on its own. 
