How Behavioral Science Informs Policymaking


At TDL, our role is to translate science. This article is part of a series on cutting-edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. As a socially conscious applied research firm, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.


Nudge units are lauded around the world for their insight into human behavior. They are also considered pioneers of a rugged, empirical mindset that analyzes policymaking through a critical lens. However, it is essential that we continue to monitor the supposed “efficiency” of these interventions against concrete goals. It is easy to run randomized controlled trials without fully considering the outcomes that one seeks.

Dr. Sarah Ball is a former policymaker who has transitioned to academia to pursue a deep understanding of these problems. We invited her to join us at The Decision Lab to understand exactly how scientific investigation makes its way into the policies that have sway over all of our lives.

She recently completed her PhD at the University of Queensland, within the Institute for Social Science Research. Her doctoral research included an ethnographic study of a central behavioral insights team, with the aim of better understanding barriers to and facilitators of the use of behavioral insights in Australia. She now works as a research assistant at the University of Queensland on multiple projects exploring how knowledge and evidence influence and shape the actions of public servants – both at the front line and in the halls of government.

A full version of the study discussed below is available here:


Nathan: How would you describe the focus of your research to a general audience?

Dr. Ball: My academic career was very much borne out of my previous career as a public servant in the Federal Government in Australia. I was a policy officer working on social issues like child protection, indigenous governance, and as a research officer on several major policy evaluations. I became interested in the emerging work on behavioral insights, especially the use of nudges and randomized controlled trials (RCTs). I decided to do a Ph.D. in order to better understand how they might be harnessed to facilitate better policymaking. What I found was that better policymaking is actually a very complex and contested area – and that there aren’t any simple answers or quick fixes. As much as we may wish there were! Following this, I have focused my attention on understanding how ideas like behavioral insights are translated by policymakers – both public servants and government – into practice.

Nathan: How do those themes inform your specific research question?

Dr. Ball: This paper focuses on one component of the results emerging from my research – that a behavioral insights approach appeared to prioritize the promotion and use of RCTs. While there is nothing inherently problematic about encouraging the use of more rigorous methods, my findings raised questions about whether a focus on the method of evaluation was influencing which projects these teams were working on, and therefore shaping how behavioral insights were informing policy design and implementation. The research question explores the potential impact of prioritizing trials on the continued growth and relevance of behavioral insights teams in government policymaking.

Nathan: What did you think you’d find, and why?

Dr. Ball: My research was exploratory, so I really didn’t know! I knew behavioral insights were increasingly popular, and that they comprised multiple areas – the use of RCTs, nudging, and broader concepts of ‘what works’ in policymaking. What I wanted to know was how people were using these ideas, and to help build a greater understanding of the practice behind the theory. My research revealed that RCTs were a major focus for the team, sometimes to the detriment of other aspects of the behavioral insights ‘framework’. This led me to seek out ways to further interrogate this finding and what it might mean for the practice of behavioral insights more broadly.

Nathan: What rough process did you follow?

Dr. Ball: As my research was a single case study, I collaborated with my coauthor Brian Head, who had recently undertaken an interview-based study with four behavioral insights teams in Australia and New Zealand. He was able to share his early results, and these, similarly, appeared to reflect a prioritization of RCTs and several challenges emerging from it. Our paper was not a comparative paper in the standard sense; instead, it was an exploratory piece. As behavioral insights teams grow in popularity and influence, it is important to use the empirical data that exists to interrogate practice. To date, this is a very small number of studies (for example, see Feitsma 2018, 2019; Einfeld 2019; Ewert 2019). It was our hope that this data would inform governments, behavioral science practitioners, and future research, not provide a definitive model for prediction or explanation. We used an approach of ‘abduction’ – puzzling out. As described above, this involved going back and forth between our research, the theory, and the literature to discern patterns of interest. These patterns are what constitute the themes in thematic qualitative analysis.

Nathan: What did you end up finding out?

Dr. Ball: We recognized a preference amongst the teams for selecting projects on the basis of their viability for a trial. Also, while the teams spoke about undertaking early-stage qualitative and contextualizing work, this was far less likely to be prioritized than a trial. Taking this knowledge, we were then able to point to the literature, which highlights the many challenges that emerge when policymakers prioritize one method or instrument. RCTs, in particular, carry a number of risks in policymaking and present a unique set of challenges. First, they can invite accusations of technocracy.

Dr. Ball: RCTs can be exclusionary, both in terms of the technical knowledge required to understand the process and in terms of democratic engagement with the problems and solutions. Second, the use of more technical language, which requires a degree of expertise to analyze, can lead to a decrease in transparency for the public, and even within the government itself. Finally, trials-based policy advice can risk devaluing experiential knowledge, both from front-line service providers and from less powerful actors. Those promoting RCTs often dismiss experiential knowledge as anecdotal, and therefore of limited value. In the case of policy, this can be very problematic, particularly given that these actors will be responsible for implementing the policy, or will be its recipients.

Nathan: How do you think this is relevant to an applied setting (i.e. in business or public policy)?

Dr. Ball: The paper serves as a reminder to these teams that pluralism and pragmatism are an inevitable component of working in any politicized environment, and government especially. More specifically, this paper, and my research more broadly, points to the critical importance of undertaking early-stage contextual research to understand the behavior before determining the outcome to be trialed. This can serve to highlight the ethical and normative dimensions of the project and minimize the risk of rolling out an ‘effective’ but undesirable intervention.
