
How Effective Is Nudging?

Foreword

At TDL, our role is to translate science. This article is part of a series on cutting edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. At TDL, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.

Introduction

The concept of nudging has recently grown in popularity. This is partly due to how exciting and innovative these interventions can be. But more important than their novelty is whether they actually work. And if they do, which conditions matter for implementing nudges, and what can we learn from studying them on a large scale?

As an applied behavioral science research firm, The Decision Lab is interested in learning more about the effectiveness of nudges and how they can be better implemented to drive social change. To further this interest, we reached out to Dr. Dennis Hummel and Prof. Alexander Maedche to learn about their work on studying the effectiveness of nudges and their attempt at classifying them with the purpose of guiding future research.

Full versions of some of Dennis and Alexander’s studies are available here:

Who can be nudged? Examining nudging effectiveness in the context of need for cognition and need for uniqueness

How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies

Designing adaptive nudges for multi-channel choices of digital services: A laboratory experiment design

Improving Digital Nudging Using Attentive User Interfaces: Theory Development and Experiment Design

Accumulation and Evolution of Design Knowledge in Design Science Research – A Journey Through Time and Space

How would you describe the focus of your research in simple terms?

Nudges are “any aspects of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives”.1 Nudges are very popular and affect our everyday lives, for example through governmental nudge units such as the Behavioral Insights Team in the UK or the former Social and Behavioral Sciences Team in the US. Yet it has never really been investigated on a large scale whether nudges actually work and, if so, under which conditions. One of the authors of the original nudging book has even dedicated a separate journal paper to “nudges that fail”.2 With our research,3 we aimed to estimate the effectiveness of nudging. Moreover, we wanted to design a classification system that can serve as a guide for future nudging studies.

How would you explain your research question to the general public? 

Well, we follow two main research questions. On the one hand, we want to judge whether the hype around nudging is supported by scientific data. We ask by how much nudges increase or decrease an outcome compared with a control group that received no nudge.

In addition, we want to know which factors influence differences in effectiveness. Since nudging is a broad concept, some types of nudges or contexts could be more effective than others. On the other hand, we asked ourselves whether there is a way to classify all nudging studies into one comprehensive system, such as a taxonomy or a morphological box.

What did you think you’d find, and why?

As we followed an exploratory approach, we did not formulate any explicit hypotheses. However, we of course expected that nudging would be highly effective. Based on previous literature reviews on nudging, we thought that defaults in particular would be among the most effective types of nudges. Moreover, we expected that nudges might differ by context (for example, energy or the environment) and between offline nudging and digital nudging, a rather new concept introduced by Weinmann et al.4 Finally, we also hoped to find practical information, such as the countries in which the fewest nudging studies have been conducted or the types of nudges that have rarely been used, to offer avenues of future research to other researchers. As for the classification system, we were entirely curious and open, as taxonomies and morphological boxes are developed throughout the process.

What sort of process did you follow?

We first conducted a systematic literature review. Literature reviews proceed roughly as follows: after defining a goal, a search strategy, keywords, and databases, we ran a keyword combination in several academic databases. We had a broad set of keywords and found about 2,500 papers, which then had to be screened based on the title, the keywords, and the abstract. After the screening, we read 280 papers in full to distill the 100 relevant papers for our analysis (it was really a coincidence that it ended up at such a round number). These papers were then analyzed in detail, extracting the type of nudge, the effect size, the context, and other relevant information. In the end, we created a database, available on request, with more than 300 different nudging treatments and more than 20 extracted characteristics for each treatment. To design the morphological box, we followed the recommendations of Nickerson et al.5
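The two-stage screening funnel described above can be sketched in a few lines of code. This is only a toy illustration: the paper records, field names, and screening predicates below are invented for the example and do not come from the authors' actual review protocol.

```python
def screen(papers, keep):
    """Keep only the records that pass the given screening predicate."""
    return [p for p in papers if keep(p)]

# Toy stand-ins for the ~2,500 keyword hits; all fields are invented.
corpus = [
    {"title": "Defaults in organ donation", "abstract": "a nudge field study", "empirical": True},
    {"title": "A theory of nudges", "abstract": "conceptual nudge overview", "empirical": False},
    {"title": "Nudging healthy eating", "abstract": "nudge experiment", "empirical": True},
]

# Stage 1: screen on title, keywords, and abstract.
candidates = screen(corpus, lambda p: "nudge" in p["abstract"].lower())
# Stage 2: full-text reading keeps only empirical nudging studies.
relevant = screen(candidates, lambda p: p["empirical"])
```

The same pattern scales from this toy corpus to the real one: a cheap metadata filter first, then an expensive full-text filter on the survivors.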

What did you end up finding out?

We found much more than we could ever present in one academic paper. First, our analysis revealed that only 62% of the nudging treatments are statistically significant, which is much lower than we initially expected. Nudges have a median effect size of 21%, which varies with the type of nudge and the context. As expected, defaults are the most effective nudges, while precommitment strategies (i.e., you commit now to do something in the future) are the least effective. Moreover, digital nudging is about as effective as offline nudging, but it offers new possibilities to individualize nudges. This means that digital nudges can be adapted more easily to the individual characteristics of decision-makers (see a brand-new study for more information: Ingendahl et al., 2020).6 Finally, we developed a morphological box that categorizes empirical nudging studies along eight dimensions.
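The two headline statistics above (the share of statistically significant treatments and the median effect size) are straightforward to compute once the treatments are in a database. A minimal sketch follows; the records and numbers are invented for illustration and do not reproduce the paper's data.

```python
from statistics import median

# Invented records mimicking the structure of a treatment database.
treatments = [
    {"nudge": "default",       "effect_pct": 35.0, "significant": True},
    {"nudge": "reminder",      "effect_pct": 21.0, "significant": True},
    {"nudge": "feedback",      "effect_pct": 18.0, "significant": True},
    {"nudge": "precommitment", "effect_pct": 4.0,  "significant": False},
]

# Share of treatments reaching statistical significance.
share_significant = sum(t["significant"] for t in treatments) / len(treatments)
# Median relative effect size across all treatments.
median_effect = median(t["effect_pct"] for t in treatments)
```

A median (rather than a mean) is the natural summary here because effect sizes across heterogeneous studies tend to be skewed by a few very large effects.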

How do you think this is relevant to an applied setting?  

When the paper was published, many people from business and civil society contacted us to learn more about applications of the results. Managers often asked for the most effective nudge to increase sales, which is of course not the purpose of nudging (and is better classified as a form of manipulation). But for public goods, our results show, for instance, that changing the required effort, reminders, and feedback are effective nudges in a health context, which might offer ways to fight the Covid-19 virus (e.g., by reminding people to wash their hands or giving them feedback on whether they have washed long enough).

What do you think some exciting directions are for research stemming from your study? 

Our study offers a variety of avenues for future research. First, we noticed that only a few studies used digital nudges. As more and more decisions are made online, this is definitely an evolving area for future research. This is particularly true if you consider that programming digital environments is much easier than physically rearranging cafeteria lines or changing the default for organ donation. Also, more studies could be conducted in Africa, Asia, or Latin America, the latter being entirely ignored by the studies we found. Finally, certain types of nudges are under-researched, such as precommitment strategies or feedback nudges (the latter being very surprising to us, as feedback mechanisms of all types are very common today).

References

1. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

2. Sunstein, C. R. (2017). Nudges that fail. Behavioural Public Policy, 1(1), 4–25.

3. Hummel, D., & Maedche, A. (2019). How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies. Journal of Behavioral and Experimental Economics, 80, 47–58.

4. Weinmann, M., Schneider, C., & vom Brocke, J. (2016). Digital nudging. Business & Information Systems Engineering.

5. Nickerson, R. C., Varshney, U., & Muntermann, J. (2013). A method for taxonomy development and its application in information systems. European Journal of Information Systems, 22(3), 336–359.

6. Ingendahl, M., Hummel, D., Vogel, T., & Maedche, A. (2020). Who can be nudged? Examining nudging effectiveness in the context of Need for Cognition and Need for Uniqueness. Journal of Consumer Behaviour.

About the Authors

Dennis Hummel

Karlsruhe Institute of Technology

Dr. Dennis Hummel is a researcher at the Institute of Information Systems and Marketing (IISM) at the Karlsruhe Institute of Technology (KIT), where he received his doctoral degree in 2019. He holds a B.Sc. in Business Administration from the University of Mannheim and an M.Sc. in Managerial and Financial Economics from HEC Paris. His research focuses on consumer behavior in digital channels, more specifically, guiding consumer behavior using digital nudges.

Alexander Maedche

Karlsruhe Institute of Technology (KIT)

Prof. Alexander Maedche is a professor at the Karlsruhe Institute of Technology (KIT) and head of the Information Systems & Service Design research group at the Institute of Information Systems and Marketing (IISM) and the Karlsruhe Service Research Institute (KSRI). His research focuses on designing interactive and intelligent digital service systems. His work is published in leading international journals such as Management Information Systems Quarterly (MISQ), Journal of the Association for Information Systems (JAIS), Business Process Management Journal (BPMJ), Information and Software Technology, IEEE Intelligent Systems, SIGMOD Record, and AI Magazine.

Nathan Collett

Senior Editor

Nathan Collett studies decision-making and philosophy at McGill University. Experiences that inform his interdisciplinary mindset include a fellowship in the Research Group on Constitutional Studies, research at the Montreal Neurological Institute, a Harvard University architecture program, a fascination with modern physics, and several years as a technical director, program coordinator, and counselor at a youth-run summer camp on Gabriola Island. An upcoming academic project will focus on the political and philosophical consequences of emerging findings in behavioral science. He grew up in British Columbia, spending roughly equal time reading and exploring the outdoors, which ensured a lasting appreciation for nature. He prioritizes creativity, inclusion, sustainability, and integrity in all of his work.
