So-called “nudge units” are popping up in governments all around the world.
The best-known examples include the U.K.’s Behavioural Insights Team, created in 2010, and the White House-based Social and Behavioral Sciences Team, introduced by the Obama administration in 2014. Their mission is to leverage findings from behavioral science so that people’s decisions can be nudged toward their own best intentions, without curtailing their freedom to make choices that cut against those priorities.
Overall, these – and other – governments have made important strides when it comes to using behavioral science to nudge their constituents into better choices.
Yet, the same governments have done little to improve their own decision-making processes. Consider big missteps like the Flint water crisis. How could officials in Michigan decide to place an essential service – safe water – and almost 100,000 people at risk in order to save US$100 per day for three months? No defensible decision-making process should have allowed this call to be made.
When it comes to many of the big decisions faced by governments – and the private sector – behavioral science has more to offer than simple nudges.
Behavioral scientists who study decision-making processes could also help policy-makers understand why things went wrong in Flint, and how to get their arms around a wide array of society’s biggest problems – from energy transitions to how to best approach the refugee crisis in Syria.
When nudges are enough
The idea of nudging people in the direction of decisions that are in their own best interest has been around for a while. But it was popularized in 2008 with the publication of the bestseller “Nudge” by Richard Thaler of the University of Chicago and Cass Sunstein of Harvard.
A common nudge goes something like this: if we want to eat better but are having a hard time doing it, choice architects can reengineer the environment in which we make our food choices so that healthier options are intuitively easier to select, without making it unrealistically difficult to eat junk food if that’s what we’d rather do. So, for example, we can shelve healthy foods at eye level in supermarkets, with less-healthy options relegated to the shelves nearer to the floor.
Likewise, if we want to encourage more people to be organ donors, choice architects can design the form we fill out at the DMV so that the choice we make without thinking is the one that may allow us to save someone’s life in the future.
In my own research group, we lump these kinds of interventions under the umbrella of passive decision support because they don’t require a lot of effort on the part of a decision-maker. Indeed, these approaches are about exploiting – not correcting – the judgmental biases that people bring with them to all manner of decisions, large and small.
Since the publication of “Nudge,” there has been a proliferation of interest in bringing choice architecture into the policy mainstream. Even institutions like the World Bank and the Organisation for Economic Co-operation and Development are rolling out their own nudge units. And, you shouldn’t be surprised to learn that the private sector has jumped on the increasingly crowded bandwagon of for-profit nudging.
We’ve successfully tested nudges for water conservation and sustainable food choice. Others have applied nudges to an even broader range of contexts. There’s no denying that choice architecture can work like gangbusters, which explains the widespread interest.
Sometimes a nudge isn’t enough
Nudges work for a wide array of choices, from ones we face every day to those we face only rarely. Likewise, nudges are particularly well-suited to decisions that are complex, with lots of different alternatives to choose from. And, they are advocated in situations where the outcomes of our decisions are delayed far enough into the future that they feel uncertain or abstract. This describes many of the big decisions policy-makers face, so it makes sense to think the solution must be more nudge units.
But herein lies the rub. For every context where a nudge seems like a realistic option, there’s at least another context where the application of passive decision support would either be impossible – or, worse, a mistake.
Take, for example, the question of energy transitions. These transitions are often characterized by the move from infrastructure based on fossil fuels to infrastructure based on renewables, in order to address all manner of risks, including those from climate change. These are decisions that society makes infrequently. They are complex. And, the outcomes – which depend on our ability to meet conflicting economic, social and environmental objectives – will be delayed.
But, absent regulation that would place severe restrictions on the kinds of options we could choose from – and which, incidentally, would violate the freedom-of-choice tenet of choice architecture – there’s no way to put renewable infrastructure options at proverbial eye level for state or federal decision-makers, or their stakeholders.
Simply put, a nudge for a decision like this would be impossible. In these cases, decisions have to be made the old-fashioned way: with a heavy lift instead of a nudge.
Often, decisions are more complex
Complex policy decisions like this require what we call active decision support.
In these cases, specialists trained in the science of decision-making must work with people to help them overcome predictable biases and to approach decisions differently from how they might otherwise make them instinctively. To inform and structure these kinds of decisions, we – like choice architects – also look to insights from the behavioral sciences.
For example, we have a rich understanding of the decision-making shortcuts that people apply, as well as of the predictable biases that accompany them. So, we know what to be on the lookout for when we help individuals and groups make better decisions.
When evaluating problems that unfold over long periods of time, we know that people tend not to look at cumulative effects, or to consider how choices made today may restrict the choices that can be made in the future.
Likewise, we see that decision-makers struggle with questions about how to put boundaries around the problem before them. For example, who really counts as a legitimate stakeholder, and who doesn’t? Likewise, are there hard deadlines or financial ceilings that must be obeyed? Or are these really soft constraints that can be challenged if the right option can be identified?
We’ve also learned that decision-makers often fail to adequately account for the broad range of objectives that ought to guide their decisions, as well as the performance measures that let them know if they’ve achieved them. And, we know that the manner in which people search for alternatives is often incremental at best. People look to obvious and easy-to-find options – the very tendency that nudges exploit – at the expense of the creativity that’s required to address the really complex challenges.