Nudges Work (and practitioners know exactly how well)
Foreword
At TDL, our goal is to make behavioral science accessible to the masses. This article is part of a series on cutting-edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. As a socially conscious applied research firm, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.
Introduction
Behavioral science is opening up new lines of inquiry across all sorts of areas, both in academia and in the public sector. One of the areas where behavioral science has been most effective is public policy. Most famously represented by the Behavioral Insights Team, behavioral scientists are changing the content and delivery of policy all over the globe. As a socially conscious applied research firm, TDL is interested in using empathy, technology, and design thinking to promote better outcomes in many aspects of society, from health to education to the economic empowerment of disadvantaged groups. To amplify these impacts even further, we reach out to experts currently conducting research in areas that engage behavioral science in the pursuit of socially conscious goals.
With this in mind, The Decision Lab touched base with Elizabeth Linos and Stefano DellaVigna, two prominent academics who study economics, public policy, and behavioral science.
Dr. Elizabeth Linos is an assistant professor of public policy at UC Berkeley. Her research lies at the intersection of public management and behavioral science, which involves using tools from behavioral science to improve government service delivery. She was formerly the VP and Head of Research and Evaluation at the Behavioral Insights Team in North America, where she worked with government agencies in the US and the UK to improve programs using behavioral science and to build capacity around rigorous evaluation.
Dr. Stefano DellaVigna is the Daniel Koshland, Sr. Distinguished Professor of Economics and Professor of Business Administration at the University of California, Berkeley. He is the co-director of the Initiative for Behavioral Economics and Finance and a co-editor of the American Economic Review. He has studied the economics of the media, the design of model-based field experiments, the analysis of scientific journals, and reference-dependence for unemployed workers.
In their article, Dr. Linos and Dr. DellaVigna investigated the effectiveness of nudging, based on data provided by two of the largest Nudge Units in North America.
A full version of the article is available here: https://eml.berkeley.edu/~sdellavi/wp/NudgeToScale2020-05-09.pdf
Transcript
Nathan: How would you describe the focus of your research to a general audience?
Dr. Linos: The focus of my research is how to use what we know about how people actually behave, drawing on decades of research from psychology and economics, to improve how governments are able to deliver services. In my case, this means thinking about how to recruit, retain, and support government workers in delivering better services. It also means improving the way governments reach out to residents about programs and services for which they are eligible.
Nathan: How did you bring those broad themes into a specific project?
Dr. Linos: In this project, we were hoping to better understand the impact of behavioral science units in governments. “Nudge Units,” as they are often called, have become very popular across the world, with over 200 units in various countries dedicated to using behavioral science to improve government service delivery. These units have done something many academics have only dreamed of: they have normalized the use of rigorous evaluation (randomized controlled trials) at scale, running hundreds of well-designed trials in policy areas ranging from education to social safety net take-up to public transportation. Our goal was to understand what the average effect of these nudges is at scale, to better understand whether the nudge approach can make a meaningful difference when taken out of the lab and individual academic studies and applied broadly.
Nathan: Can you give us an overview of your experimental approach?
Dr. Linos: First, we analyzed hundreds of trials (none of which had previously been published) conducted by two of the largest “Nudge Units” in the US: the Office of Evaluation Sciences and the Behavioral Insights Team North America. These two units have worked at the federal, state, and local levels, and they gave us full access to every trial they had run since 2015, a remarkable act of transparency and documentation. We estimated the average effect of a nudge across all trials and then compared our results to existing meta-analyses of “nudge” trials published in the academic literature. The majority of the project explores why there is a large gap between the average effect of a nudge in the Nudge Unit trials and the average effect in recent meta-analyses. We considered several explanations: selective publication, differences in the characteristics of the trials, and differences in the characteristics of the nudges.
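To make the pooling step concrete, here is a minimal sketch of a fixed-effect meta-analytic average, the standard way to combine treatment effects across many trials. The trial estimates below are invented placeholders, not data from this study, and the estimator is illustrative rather than the authors' exact specification.

```python
# A minimal fixed-effect meta-analysis: pool per-trial treatment effects
# using inverse-variance weights, so precisely estimated trials count more.
# The (effect, standard error) pairs are illustrative placeholders only.
import numpy as np

trials = [(1.2, 0.4), (0.3, 0.9), (2.5, 1.1), (-0.4, 0.7), (1.8, 0.5)]
effects = np.array([t[0] for t in trials])   # percentage points
ses = np.array([t[1] for t in trials])

weights = 1.0 / ses**2                       # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.2f} pp (SE {pooled_se:.2f})")
```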
Nathan: What were your findings?
Dr. Linos: First, we found that the average treatment effect of a nudge is positive and statistically significant: across all trials, nudges increase take-up by approximately 1.4 percentage points (an 8% increase over the control group). If you were to look at recently published meta-analyses of academic papers, the average effect of a nudge would be over 8 percentage points. We find that we can completely close the gap between these two estimates when we account for selective publication of academic papers. That is, there are probably trials run by academics with null or negative results that are never published, or even written up. This issue, often called the “file drawer” problem, leads to an overestimate of the average effect of a nudge in the published academic literature. We can also close about two thirds of the gap by considering differences between the types of nudges run by academics and those run by Nudge Units. Some of these differences go hand in hand with going to scale; for example, trials with in-person interventions are more effective, but in-person delivery is much less feasible at scale.
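The file-drawer mechanism is easy to see in a stylized simulation: if only statistically significant positive results get written up, the published average overstates the true effect even when every individual trial is honest. All parameters below are assumptions for illustration; only the 1.4 pp “true effect” echoes the Nudge Unit estimate.

```python
# Stylized simulation of the "file drawer" problem. Assumption: every trial
# estimates the same true effect, but only significant positive results
# (z > 1.96) ever get published. Parameters are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 1.4      # percentage points, echoing the Nudge Unit estimate
se = 2.0               # assumed standard error of a typical small trial
n_trials = 10_000

estimates = rng.normal(true_effect, se, n_trials)
published = estimates[estimates / se > 1.96]   # the "file drawer" filter

print(f"Mean of all trials:       {estimates.mean():.2f} pp")
print(f"Mean of published trials: {published.mean():.2f} pp")
# The first number sits near 1.4 pp; the second is several times larger.
```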
Nathan: How do you think this is relevant to an applied setting (i.e. in business or public policy)?
Dr. Linos: These results are particularly relevant to an applied setting because they provide an optimistic but realistic estimate of what is possible with a nudge. On the one hand, we now have clear evidence that, on average, nudges conducted across a variety of settings and government agencies are effective compared to a well-defined comparison group. This is no small feat. As a reminder, the majority of policies and programs implemented in government are not rigorously evaluated at all, and we often see programs that were thought to be effective turn out to be ineffective once they are put under the scrutiny of rigorous evaluation. At the same time, the likely impact of any given nudge is probably smaller than what policymakers would predict if they looked only at the academic literature. This means that businesses or policymakers may need to move beyond “nudges” to achieve a larger impact. Nudge Units themselves acknowledge this: many behavioral science teams and experts are already exploring how to use insights from behavioral science to design better policies and legislation, and to rethink programs as a whole. Nudges are just one small part of the toolbox.
Nathan: Do you see future research stemming from your study? In what directions?
Dr. Linos: There are many additional questions this research spurs. First, in trying to understand the overall impact of Nudge Units on policy, it’s important to document what happens after a trial. That is, once we have evidence that something works better than the status quo, we want to know how quickly that knowledge spreads to other policymakers and how and when it gets implemented as the new status quo. We’re also interested in understanding more deeply the cost-benefit calculations that would quantify the exact value of a 1.4 percentage point increase in take-up. Lastly, we hope our research will spur an ongoing conversation about how to document and share results from all trials, irrespective of whether they are published in top academic journals.
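A back-of-the-envelope version of that cost-benefit calculation looks like this; every figure below is a hypothetical assumption, included only to show the arithmetic a policymaker would run.

```python
# Hypothetical cost-benefit arithmetic for a 1.4 pp take-up nudge.
# All dollar figures and the population size are invented assumptions.
population = 100_000       # people receiving the nudge (e.g., a mailing)
lift_pp = 1.4              # average take-up increase, in percentage points
value_per_takeup = 50.0    # assumed value of one extra enrollment, in dollars
cost_per_contact = 0.60    # assumed cost of printing and sending one letter

extra_takeups = population * lift_pp / 100    # 1,400 extra enrollments
benefit = extra_takeups * value_per_takeup    # $70,000
cost = population * cost_per_contact          # $60,000

print(f"Extra take-ups: {extra_takeups:,.0f}")
print(f"Benefit ${benefit:,.0f} vs. cost ${cost:,.0f} -> net ${benefit - cost:,.0f}")
```

Because nudges are typically cheap per contact, even a small percentage point lift can clear the cost-benefit bar; the calculation flips quickly if an intervention is expensive to deliver, which is one reason the value of that 1.4 percentage point figure depends so heavily on context.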
About the Authors
Elizabeth Linos
Dr. Elizabeth Linos is an assistant professor of public policy at UC Berkeley. Her research lies at the intersection of public management and behavioral science, which involves using tools from behavioral science to improve government service delivery. She was formerly the VP and Head of Research and Evaluation at the Behavioral Insights Team in North America, where she worked with government agencies in the US and the UK to improve programs using behavioral science and to build capacity around rigorous evaluation.
Nathan Collett
Nathan Collett studies decision-making and philosophy at McGill University. Experiences that inform his interdisciplinary mindset include a fellowship in the Research Group on Constitutional Studies, research at the Montreal Neurological Institute, a Harvard University architecture program, a fascination with modern physics, and several years as a technical director, program coordinator, and counselor at a youth-run summer camp on Gabriola Island. An upcoming academic project will focus on the political and philosophical consequences of emerging findings in behavioral science. He grew up in British Columbia, spending roughly equal time reading and exploring the outdoors, which ensured a lasting appreciation for nature. He prioritizes creativity, inclusion, sustainability, and integrity in all of his work.
Stefano DellaVigna
Dr. Stefano DellaVigna is the Daniel Koshland, Sr. Distinguished Professor of Economics and Professor of Business Administration at the University of California, Berkeley. He is the co-director of the Initiative for Behavioral Economics and Finance and a co-editor of the American Economic Review. He has studied the economics of the media, the design of model-based field experiments, the analysis of scientific journals, and reference-dependence for unemployed workers.