Creating people-centered behavioral public policy with Elizabeth Linos

Podcast · April 25, 2022

In most areas of social policy, or at least in public policy areas where we see behavioral science playing a role, we're still learning what works in different contexts. And so, the critical decision that a policymaker has to make is, “Am I willing to take on the risk, and the potential reward, of looking at the data, testing rigorously, and sharing those findings with the world, no matter the outcome?”

Listen to this episode


Intro

In this episode, Brooke speaks with Elizabeth Linos, Michelle Schwartz Assistant Professor of Public Policy at the Goldman School at UC Berkeley. Drawing on her many years of experience at the intersection of behavioral science and public policy, Elizabeth shares her insights on how the field has developed and what the future holds for behavioral researchers and policymakers interested in changing human behavior for social purposes. Some of the topics discussed include:

  • How low-cost or light-touch nudges sparked an interest in behavioral science amongst policymakers in the early days, and why we need to think further down the funnel to achieve the exact outcomes we desire.
  • Instances where behavioral science can make a real impact, as well as times when we need to be ‘real’ about how much it can actually affect policy outcomes.
  • Why the real challenge for behavioral scientists and practitioners lies in persuading policymakers to adopt a holistic, experimentation-based approach to behavioral challenges, as opposed to ‘quick-win’ solutions.
  • The case for a people-centered approach to behavioral policy design, and why it’s important that researchers pay special attention to the experiences of frontline workers and not just the raw data.
  • Immediate steps that policymakers and behavioral scientists can take in pursuing meaningful projects that address public policy challenges.

Sneak Peek

Moving Beyond Entry Level Nudging

“Early nudging in this space did show a lot of promise on outcomes that were based on things like, Can you take the first step? Can you click on the link to go to the website? Can you engage with the material that the government shared? How can we change the outreach material using behavioral science, so that that engagement is higher, or that take up at that first stage is higher? What we're learning today is that sometimes that translates to the main outcome that we care about, people finding funds or people accessing services, and sometimes it doesn't.”

The Dangers of ‘Over-Selling’ Behavioral Science

“Behavioral scientists have an obligation not to oversell or overpromise what is possible with light touch nudges. And I really think that's where we are now, both politically and substantively. If we go to an organizational leader or public manager, and we say, "I'm going to solve your problem and you're not going to have to spend a dollar," then we're really setting up a false promise.”

Small But Meaningful Effect Sizes

“Sometimes behavioral science gets criticism that the effect sizes are small, or that it's not enough of a change. But one thing that behavioral scientists have been really committed to from the early days is rigorously testing things and rigorously evaluating those effect sizes. What appears to be a small effect size might actually just be realistic across all sorts of policy levers, at least in the short term.”

Policy Without The Moral Judgement

“It's a really useful insight to say, "Look, actually, what you're doing is you're keeping people out of the program, because these hurdles are not just a screening tool. They're keeping out people who are most in need of accessing your services due to the psychological barriers that you're setting up. Creating lots of work requirements or documentation requirements is actually hindering participation in these programs."”

People-Centered Solutions

“Nothing that we are uncovering in these behavioral science experiments is a truly new insight. These experiments are just uncovering or elevating things that frontline workers already know. And of course, the communities that are most affected by these programs already know. So really, this is a service, or an approach to elevating that data, summarizing that data and bringing it to the political debate. But if we're doing our jobs right, the actual insights are coming from the people who know a lot more about this than researchers.”

Taking The First Step

“The best place to start is with a measurable problem that has a behavioral component to it. And what I mean by that is: if you can get to a point where you are really clear in your work that you need people to do something differently that's measurable and actionable, like I need them to show up to this event, or I need them to apply for this job, or I need them to fill out this form, whatever the case may be, something that is truly observable and measurable, then scoping out the problem is actually 80% of the work. The behavioral science to actually move the needle can come later, through reading or through reaching out to behavioral researchers.”

Transcript

Brooke: Hello everyone, and welcome to the podcast of The Decision Lab, a socially conscious applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, research director at TDL, and I'll be your host for the discussion. My guest today is Elizabeth Linos, Michelle Schwartz Assistant Professor of Public Policy at the Goldman School at UC Berkeley. She's also a former senior member of the Behavioural Insights Team.

In today's episode, we'll be talking about public policy, behavioral insights and public policy, and how they've evolved since the early nudging days under David Cameron and Barack Obama. Elizabeth, thanks for joining us.

Elizabeth: Thank you for having me.

Brooke: So, you've been involved with behavioral insights for public policy for a long time, a number of years now. In the early days, low cost nudging was kind of sold, in a certain sense, in order to get some quick wins, build credibility, all in the hopes of being able to then dive into these larger, deeper and more transformational projects. Do you feel that we're still in that low cost zone right now?

Elizabeth: Yeah, it's such a good question. And I think the way you framed it is correct. In the early days, part of the reason so many governments were excited about taking on this new idea was because of this promise that was coming at the right time, right after the global financial crisis that said, "Look, you don't need to have very large investments to have large results. If you are thoughtful about your small investments, these low cost nudges, you could still have a disproportionate impact on the social outcomes that you care about."

And I do think that message still holds true in a lot of government agencies across the world. We're still at that stage running low cost trials, tweaking messaging, thinking about the low hanging fruit. But what you'll see amongst the leaders of the behavioral science movement is a real encouragement to think beyond those low cost nudges. So, we're at a point now where we're saying, "Okay, now that we understand insights from behavioral science, how can we use those in the way we legislate or in the way we set policy or in decisions about other types of compliance burdens or requirements that we set up when people interact with our government?"

So, I think we're just at that turning point in terms of behavioral science thinking, but a lot of government agencies haven't really maxed out on the low hanging fruit yet.

Brooke: So, we've been talking a lot about where we are. I suspect that a lot of this has to do with how we're defining 'we' in this conversation. Are there certain groups of countries? You mentioned also certain departments. Are there certain functional areas within governments that you identify as being in different stages along that pathway?

Elizabeth: Yeah, that's a great question. One thing I've always been very proud of and excited about in behavioral science is that you actually see the growth and use of behavioral science concepts across the world. So, a recent count by the OECD suggested that there are more than 200 units across the world dedicated to using behavioral science in government. And so this isn't really an issue that has only come up in the Global North or in the US or in OECD countries.

That being said, it's certainly the case that the first behavioral science teams within government were in the UK and the US. And so government agencies, both in the UK and the US, have built internal capacity to run mini-trials that use behavioral science insights in the day-to-day operations of government work. At the same time, we're seeing really cool results from Chile. And now with the COVID crisis, people across the world are using insights from behavioral science to nudge vaccine uptake or encourage better mask wearing, and some of those trials are in Bangladesh, Germany and, really, across the world.

These types of tools have started to be used. The next stage in this process globally is to think about how behavioral scientists will take on permanent roles in government units, so that it's not just a consultant or a researcher, or a small innovation team, but where behavioral scientists will take on a more central role to public policymaking.

Brooke: Really, I want to unpack that kind of deeper role. But I'm inclined now to just pause and ask you, before we move off this topic, whether there are some examples that come to mind of instances where certain nudges themselves, or certain approaches to designing nudges or interacting with the machinery of government, have shown themselves to be different or not clearly portable from one cultural context to another.

Elizabeth: Yeah, it's a great question. One thing that we, the behavioral science community more broadly, certainly faced when behavioral science projects started being developed is this idea that context really matters, not only in behavioral science, but in any good research that's applied, right?

But if we're doing our job well, then we are really tapping into an underlying psychological mechanism that, I don't want to say transcends cultural context, but certainly can be applied in multiple cultural contexts. So, we talk about social norms in behavioral science. In the UK, there was a lot of excitement when we found that just telling people that 9 out of 10 people pay their taxes on time is an effective way to get people who are delinquent on their taxes to pay their taxes.

Now, that idea didn't work immediately as well in the federal government in the US. But it was effective in Guatemala. And so, we're still learning in which cultural contexts these things replicate.

Brooke: Yeah, I was just going to summarize briefly and say: it seems like what we're seeing is that we know that social norms, for instance, are going to be an effective mechanism across cultural contexts. Social norms are important everywhere you go, but what exactly those social norms are, and what kinds of outcomes you can improve by tapping into them, will be different depending on where you are?

Elizabeth: I think that's exactly right. And it's not only a matter of cultural context, but also a matter of timing, right? So, we know that it's hard to capture people's attention. They have limited cognitive bandwidth. What was a really critical tool for capturing people's attention 10 years ago might not be as effective today. If you're receiving your 100th personalized text message that says, "Dear Brooke, I want you to donate," that's going to look very different today than it did 10 years ago. Not because the underlying psychology is wrong, but because it's not about the exact message or the exact wording. It's really about making people feel like you're creating a personal connection, or making sure something stands out in a sea of information.

So, if we focus on the insight, as opposed to the wording, I think more things will replicate, both across time and across cultural contexts.

Brooke: Thanks for that. Let's get back to the main line of discussion there, which was about this kind of evolution and this deepening of the relationship between the use of behavioral insights and these wider conversations in government. So we've been talking about these 'quick win' projects that were initially sold to get behavioral insights off the ground. You mentioned the economic context just after the financial crisis, almost 15 years ago now.

And you mentioned that this was a really effective value proposition for governments at the time, who were really strapped for cash. And so this idea that there could be small investments that paid outsized dividends was a very attractive one. When we started to see those successes and 'quick win' projects, what did we see further down the funnel?

For example, if there was a nudge that was designed to improve signup rates for a certain kind of social benefit, did that translate into equally impressive results, when it came to the impact of the services that people were encouraged to sign up for?

Elizabeth: Yeah, that's a really important part of what we're learning today. There's a host of really fascinating research that all kind of tackles a similar outcome, which is: how do we get people to take up programs for which they're eligible? In the US context, you can think about that in terms of take up of a tax credit, such as the Child Tax Credit today. But you could also apply that to a whole bunch of different government programs.

The early nudging in this space did show a lot of promise on outcomes that were based on things like, "Can you take the first step?" "Can you click on the link to go to the website?" "Can you engage with the material that the government shared?" and "How can we change the outreach material using behavioral science, so that that engagement is higher, or that take up at that first stage is higher?" What we're learning today is that sometimes that translates to the main outcome that we care about, people finding funds or people accessing services, and sometimes it doesn't.

And so, there's a real push today to not just measure the impact at that first stage. Just to give you some examples: we did some work in California, through a collaboration with various state agencies, on the Earned Income Tax Credit. We nudged over a million Californians, changing the language in the letters that were going out to tell people that they were eligible for the EITC. And we've seen improvements in engagement: people read the material more, they went to the website more, and we learned some things about which language was most effective at increasing engagement.

But ultimately, we followed up and looked at whether or not people ended up filing their taxes to be able to access this Earned Income Tax Credit. We didn't see movement on that front. So, even though people were engaging more and we had overcome that first barrier, there was a second huge administrative hurdle, actually filing your taxes, which we weren't able to overcome with this light touch nudge.

Now, that's not always going to be the case; we are seeing some success stories where an early nudge does lead to really successful outcomes down the line. In Denver, we did some work on rental assistance during the COVID crisis. And we did see that the outreach materials not only affected engagement with rental assistance, they actually led to more applications and more funds going out to households.

So, it really is on a case-by-case basis. But what I think we're starting to realize now in the behavioral science community is that our nudges are effective against some barriers, but we really need to get the barrier right. We really need to understand what the ultimate sticking point is if we're going to design policies effectively that actually capture those touch points.

Brooke: Yeah. I like the example that you shared about the Earned Income Tax Credit in California: engaging with the material, getting people to click, getting people to read, helping people find resources and get access to information. That's something that a little bit of effective nudging here and there can really help with, in terms of increasing the efficiency of conversion, right? Getting people who are reading this site to go over to that resource and figure out what's going on.

But there's still this massive barrier that you talked about, which is the tax return itself, which is extremely complicated. And that strikes me as an area where behavioural insights are not coming up short. Rather, it's that we just need to dive deeper, actually follow through on the work that we've done on the front-level stuff, those initial touch points, and go deeper into the more complicated touch points, where there's lots and lots of room to polish things up and just make them more user friendly.

So, let's talk about the kind of organizational reality of that. What's needed in order to get beyond permission to do these low cost nudges that are really just about information gaps?

Elizabeth: I think that's a really important question. Organizationally, certainly the light touch or low cost interventions are, by definition, easier to implement because they are light touch and low cost. And my sense is that there's enough of an evidence base now, globally, around the promise of behavioral science or the promise of nudges that leaders in those organizations are willing to take the risk of trying A/B tests of different language on their outreach materials and things like that. So that's, I think, a good sign.

But I think behavioral scientists have an obligation not to oversell or overpromise what is possible with those light touch nudges. And I really think that's where we are now, both politically and substantively, where if we go to an organizational leader or public manager, and we say, "I'm going to solve your problem and you're not going to have to spend a dollar," then we're really setting up a false promise.

The leaders that are taking on that next stage now have often tried the light touch version. It's usually the first step in a series of efforts to improve service delivery. And so, they know how far we can get with just a nudge and they're ready to ask the bigger question. Maybe that's part of the organizational process or the organizational learning that's necessary to get to those bigger changes, to say, "Look, this is what we can accomplish. This is how far we can go with a light touch nudge. We've tried that, we've moved the needle. Now, it's time to move to the next larger barrier."

And that's really where we are now. And I've been really excited and impressed with how, for example, the Biden administration is thinking about tax credits, where there really has been an effort to say, "Okay, this isn't just about outreach, we're going to think about the compliance hurdles around filing taxes." And now, there's a non-filer portal available for people who don't need to file their taxes to actually access the Child Tax Credit. People who have experience with public interest technology, and nonprofits like Code for America that have been working in this space, are collaborating with government agencies to try to make it easier for people to access programs.

And all of that has, either explicitly or implicitly, insights from behavioral science in its design. But I think testing the light touch version first creates the space and the willingness to go deeper with these types of programs. So, I don't think it's an either/or. I think it's just kind of moving down a path of readiness for the larger changes.

Brooke: One of the things you pointed out there that I really latched on to is this idea that in order to push deeper, you should be starting with a leader who's probably experienced some of the light touch stuff before. And something else that you mentioned towards the end of what you just said was the importance of collaboration across boundaries: that once we start driving into these deeper transformations, it's not just going to be about behavioral insights anymore. So, there's going to be a technology component.

There will be other actors in the scene that we need to collaborate with, each of whom is bringing a different piece of the puzzle that we all need to fit together. One of the things you mentioned, though, is that when you oversell, you set yourself up with unrealistic expectations, and that can ultimately make it harder to secure buy-in later down the line.

How do we strike that delicate balance between selling hard enough, so to speak, to get the project off the ground now, and under-promising on the other side of the spectrum, where the expectations are easier to meet but we might end up pigeonholing ourselves into only very small projects? It might be hard to get those projects off the ground if that kind of sales proposition, if you will, seems like it's just a little too tepid.

Elizabeth: I don't want to get too technical, but I think it's partly about how we define and predict effect sizes. So, what I mean by that is, when you're going into a government agency, or if you're working with a government agency, and you have a sense of what the current challenge is or the size of the challenge, it's about being able to say quite concretely, "That zero-cost intervention is going to increase take up by 10%, if we're lucky. It's not going to close the gap by 50%."

And until quite recently, we didn't really have that data across the board. Stefano DellaVigna and I just published a paper that looks across all the nudges that have been run by two of the largest nudge units in the US: the Office of Evaluation Sciences, which works with the White House, and the Behavioral Insights Team in North America. Across all the nudges they have run since 2015, we can now say, "Look, the average effect of these types of nudges (they're not all of behavioral science, but just these types of nudges) is about a 10% increase."

So it's positive. It's significant. It's real. But it's not 30% or 40%. And when we asked people what they thought the average effect of a nudge was, people who had run these trials in applied settings were much better able to predict the true average effect size than people who had just read about behavioral science, or were behavioral science enthusiasts, but hadn't actually run a trial themselves. So that, to me, suggests that if you haven't done it yourself, you expect much larger effects; you're excited by the prospect of much larger effects. And it's the job of people who have run these trials to say, "No, actually, an optimistic success story here is a 10% increase, or a one percentage point increase, not solving a large societal or systemic challenge with a light touch nudge."

Brooke: That's interesting. It reminds me of a point that I heard Daniel Kahneman make, I think in an interview, about statisticians. Essentially, there's a kind of systematic issue: if you ask statisticians to just pull out a number for how many observations or participants you need in an experiment in order to detect a certain magnitude of outcome with statistical significance, statisticians tend to underestimate how many people they need to put through the experiment to detect that kind of effect.

It sounds like what you're talking about here is kind of similar. But now we're not talking about statistical significance; we're talking about the effect size. In order to really get a sense of what effect size is reasonable, you have to actually do quite a bit of the work. And it sounds like the project that you took on is really interesting, because you're actually doing a systematic analysis of the effect sizes. That provides a benchmark for what a reasonable expectation is in these kinds of contexts, because it's not always intuitive.

You were talking earlier about the differences across contexts. I think we are focusing more on differences across cultural contexts. But it strikes me that there's also a really important difference between application contexts: the kinds of things we might see in public health policy, for instance, would probably be quite different from what we would see in the context of individual-level financial decision making, tax filing, and these kinds of things.

So it really does matter which context you're working in, and having that experience matters, but so does the output of the kind of work that you've done: a systematic collection that lets us know what a reasonable order of magnitude is.
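(A quick aside to make the sample-size point concrete: the sketch below, in Python with statsmodels, runs the kind of power calculation Brooke is describing. The 10% baseline take-up rate and the one-percentage-point lift are illustrative assumptions, not figures from the episode; they roughly match the nudge-sized effects Elizabeth mentions.)

```python
# Illustrative power calculation: how many participants does a trial need
# to detect a nudge-sized effect? Assumed numbers, not from the episode:
# baseline take-up of 10%, lifted to 11% by the nudge (a ~10% relative,
# one-percentage-point effect).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.10  # assumed take-up without the nudge
nudged_rate = 0.11    # assumed take-up with the nudge

# Cohen's h, the standardized effect size for comparing two proportions
effect_size = proportion_effectsize(nudged_rate, baseline_rate)

# Sample size per arm for a two-sided test, alpha = 0.05, 80% power
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Participants needed per arm: {n_per_arm:,.0f}")
# On the order of ~7,000 per arm (roughly 15,000 total) -- far more than
# most people's intuition suggests, which is exactly Kahneman's point.
```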

Elizabeth: Yeah, I think that's exactly right. And I should say, just in case it's not super clear: this would be true even if we weren't talking about behavioral science. The kinds of policy challenges that we're hoping to solve are historic. They are difficult. They include a multitude of systems and challenges that people face both at the individual level and at a societal level. And so if you ask someone who works in government, who has a lot of experience in government, whether they think any type of intervention is going to solve the problem, chances are they'll say, "There's no one silver bullet for any of these challenges anyway."

Sometimes behavioral science gets criticism that the effect sizes are small, or that it's not enough of a change. But one thing that behavioral scientists have been really committed to from the early days is rigorously testing things and rigorously evaluating those effect sizes. What appears to be a small effect size might actually just be realistic across all sorts of policy levers, at least in the short term. No doubt there are bigger systemic challenges, or systemic changes that we need to make, in any government setting. But in terms of one-off interventions, or one-off policy levers, it strikes me that we shouldn't expect larger effect sizes from many other types of interventions either.

Brooke: Right. So, it sounds like we're really getting to this point now where it's time to stop shuffling around the edges of this thing and really dive into the middle. The topic that I've just been dying to unpack with you is the follow-up. It sounds like what we're talking about here is pushing behavioral insights and interventions beyond just the surface, beyond, for example, how choices are presented, and into the nitty-gritty and often quite political aspect of which options are made available and how the system is designed: what value citizens are seeking through public programs, and the kinds of barriers that they're seeking to overcome, or that they need to overcome, in order to access that value.

That kind of transformation takes quite a bit of buy-in from senior decision makers. And as I alluded to a moment ago, it's got a political aspect to it as well, one that you certainly don't experience, or at least don't experience at full bore, when we're only talking about how options are presented. Have you seen any promising approaches for getting uptake, for entrenching those behavioral perspectives into that deeper consideration of what kinds of options are being made available and, actually, into the formulation of policy? What are the approaches that help to get uptake at the big tables where those kinds of decisions are made?

Elizabeth: This is such an important part of the process. And I should say, one thing that we often forget is government is in the business of changing people's behavior or affecting people's behavior in really meaningful ways. So, when we think about mandates or laws or traffic lights, whatever the case may be, the government is taking a stance, a pretty significant stance, on what you should and shouldn't do as a resident of a country, or as a participant in a community. Now, it doesn't always appear that way because some of these things, we're so used to seeing them that we don't really think about them in terms of the government imposing its will on the people. But that's what a stop sign is. And now we're seeing really heated debates about vaccine mandates that are similar in nature, and the government is making a decision to impose a behavioral outcome.

Now, with that in mind, we can start thinking about a shift from just reordering options to thinking about what options we make available. But I do think quite deeply that this is not the job of the behavioral scientists, but the job of the hopefully elected policymaker and the career civil servant that has been thinking about these barriers for a long time. So, sometimes, behavioral scientists can come in and can share thoughts or opinions about these more intractable problems, or what are the bigger changes that we can make, but they only really make sense in the context of a political landscape, a democratic landscape, hopefully incorporating the opinions and voices of the people that are most affected by those changes.

Those are really kind of what a well-functioning government agency should be doing anyway, even if they're not using behavioral science. We're starting to see that happening now across a whole host of government programs where administrative barriers have traditionally been prevalent. So, whether it's SNAP programs or the Child Tax Credit that I mentioned, or TANF, or all sorts of assistance programs. The underlying value judgment that has existed in the welfare state in the US has been one that suggests that people who require government assistance need to prove their worth, or need to prove that they're trying hard enough, or need to show that they're poor but not lazy.

There's a lot of moral judgments and stereotypes around people who need government assistance. And so, when behavioral science comes into this conversation about administrative barriers, it's a really useful insight to say, "Look, actually, what you're doing is you're keeping people out of the program, because these hurdles are not just a screening tool. They're keeping out people who are most in need of accessing your services. And the psychological barriers that you're setting up, by creating lots of work requirements or documentation requirements, are actually a hindrance to participation in these programs."

So, we're starting to see an interest in removing those barriers, whether it's compliance hurdles, like having to show up to a government agency office, or how we present these programs in terms of work requirements. That's the nitty-gritty, highly politicized and hotly debated topic today that, I think, brings with it insights from behavioral science, but really, it’s a political decision or value judgment about who these programs are trying to serve.

Brooke: That is such a rich insight. And I hope you can expand on that a little bit for us. It sounds like what you're talking about is that having this kind of behavioral review and a bit of behavioral research to help us understand what's driving certain outcome behaviors brings to the surface some of the underlying moral judgments that are perhaps more tacit in the way that programs and policies have been set up. Maybe they were initially set up with that kind of moral discourse being more kind of salient and in the open.

But now, it's maybe not formulated in that kind of language anymore. And all of a sudden, it comes rushing back to the surface when we start asking these questions and start undertaking this research. As a behavioral practitioner yourself, how do you navigate those waters? And how do you find the appropriate space to engage and be a steward of that conversation, and to help those moral precepts to really rise to the surface so that they can be explicitly discussed without, at the same time, monopolizing them and applying your moral lens onto them?

Elizabeth: Yeah. I mean, this is a really hard and really important road to navigate for those of us who work at the intersection of public policy and behavioral science. My sense is, and again, this is what's appropriate for me; there are certainly people who have dedicated their careers to advocacy or to other ways of affecting the system. But from my perspective, as an academic, I think there is a lot of value in using rigorous evidence and data to support this process, whatever this kind of political debate is.

So, if there were assumptions in the past, and there have been, there's now a lot of research to test those assumptions. If there were assumptions that said, "Look, if we make it hard for people to access programs, then only the most needy will access them," well, that's an empirical question, right? Is that true? Is that not true? Research from behavioral science today suggests that it is not true. The people who are being kept out are not the ones who have other options. They're the ones who are struggling the most with these barriers.

And so, that's an empirically testable assumption that researchers such as myself can not only go and test, but can then provide that data back to public managers and policymakers who are trying to design these programs. So ultimately, I think, there is a very important role for bringing the data to the table. At the same time, I think there's potentially a lot of hubris attached to people who are good with quantitative skills or people who can create datasets. There's no real substitute for lifting up the voices of the people who are most affected by these programs. Nothing that we are uncovering in these behavioral science experiments is a truly new insight. These experiments are just uncovering or elevating things that frontline workers already know. And of course, the communities that are most affected by these programs already know. So really, this is a service, or an approach to elevating that data, summarizing that data and bringing it to the political debate. But if we're doing our jobs right, the actual insights are coming from the people who know a lot more about this than researchers.

Brooke: Right. In our work at The Decision Lab, one of the things that I've found is that we blend a lot of what's done in behavioral science with a lot of what's done in design thinking, or human-centered design. Let me try to clarify in my mind exactly what it is I'm trying to say here. So, we combine these behavioral and design thinking approaches, and it strikes me that both of them get some traction on what you were just describing, namely, bringing the voices of frontline practitioners, and especially affected citizens, into these policy debates. How much of bringing the voices of those people into the discussion do you feel is actually the unique province of behavioural insights, compared to other approaches such as design thinking?

Elizabeth: Yeah, it's certainly not unique to behavioral science. I think there's a host of related disciplines and approaches to public policymaking that are really people-centered. And it is not by chance that the lab I founded and our co-directors really put people at the center of policymaking and research. So, if we start with a people-centered approach, whether it's coming from a design thinking background or from an advocacy background or from an economics background, you come to a similar conclusion, which is that our goal is to capture and understand and listen to the people that are most affected. And that's really where good innovation comes from.

Where I think behavioral science has contributed is not just the understanding of the psychological mechanisms, but thinking about how this replicates or is systematic or predictable over time so that you're moving beyond just one context. But you can understand, using a host of different tools, the psychological mechanisms that can predict future behavior with the same econometric rigor or quantitative rigor that we would expect from say, medical trials.

And so really, I think, where behavioral science has contributed to this broader movement to be more people-centered is to say, "Okay, we're going to use both a people-centered approach and insights from psychology that have existed for 50 or more years, but also take quite seriously the commitment to rigorous evaluation, testing, replication and transparency of that testing. So that we're not just learning about one context or really critically understanding one context, but trying to learn about things that are true, or at least useful across different contexts."

Brooke: So, on that note of rigor, when we're thinking about how we can interact with top level decision makers in government, usually those contexts are small in number. They present a lot of variety from one to the next. So, what can we learn from the way we're engaging with senior policymakers to try to develop a bit of a science of how we can deepen this engagement of behavioral insights into government? Or is there a way for us to be as thorough in trying to map out what works for behavioral scientists engaging with government, as we are in the kind of substance of that work when we think about policies and the way that behavioral science can improve policymaking for citizens?

Elizabeth: Yeah, I think what you're getting at is actually an interesting behavioral science question that we haven't yet answered. We can run a bunch of trials, we can do a bunch of tests, we can figure out what works in a given context. But then surely, as behavioral scientists, we understand that just presenting that information to a leader in a public sector context or in any context, really, does not magically mean that that new insight or that new evidence is going to be taken up and used in policymaking.

So, we've spent the past 10 or 15 years developing that evidence base about what works in different policy contexts. There's a second behavioral challenge or behavioral question, which is: okay, once you know that something works, what are the behavioral barriers and psychological barriers that policymakers face in adopting that evidence? And that's, I think, a really interesting ongoing question for those of us who work at the intersection of public policy and behavioral science. What's going to make it more or less likely that a new public manager, or a new mayor or agency head, is going to take up the evidence that the behavioral science community has created over the past 10 years? And again, that's going to have a host of trials associated with it, because I think we're still learning how to do that well. What are the barriers that people face in that kind of political economy sense, as opposed to just the informational sense? So the short answer to your question is: I don't think we know empirically yet. But there are certainly a lot of us, myself included, who are thinking about that question, and thinking about what it takes to increase adoption of evidence within policymaking circles.

Brooke: That's interesting. It gets back to one of the earlier points that you made about the appropriate role of the policy analyst or of a behavioural insights specialist. Ultimately, there are political questions that are not within our mandate to answer. There's something about the democratic accountability of someone who's elected to represent a group of people: they need to make the final call on behalf of the people that they represent, because they are the ones who will be held accountable to those voters.

It strikes me that there's a similar point here: when we talk about marshaling this body of evidence to figure out what's going to be most effective in getting behavioural insights taken up, we need to be careful about the way we formulate the exact outcome we're looking to promote. What we're not looking to promote, I think, because I take your point charitably, is this idea that the behavioral evidence comes in and always has the last word and the final say, because we're always right. Rather, it's about inculcating a sensitivity to evidence and a certain mindset about how questions are formulated, so that where there are pieces of those questions that are empirical, we develop that reflex to habitually reach for evidence, rather than just put our best guess, or our bias of the day, into that spot.

Elizabeth: Yeah, I think that's exactly right. So, there's two ways of thinking about this question. And I think you've hit the nail on the head on this one. One behavioral question that people are trying to answer is, when we do have a strong evidence base, that something works or doesn't work, how do we increase adoption of that finding, right? That policy outcome. 

The second, potentially more interesting and sustainable, question is: how do we increase adoption of evidence-based practices more broadly? So, how do we increase the adoption of experimentation, if you think that that's a way to get at these questions? And that's a slightly different question that is equally interesting from a behavioral perspective, because it requires thinking through barriers as wide-ranging as understanding the evidence, risk aversion, and weighing this in a broader context with a lot of other information.

So, there are many potential approaches to try to increase interest in and willingness to use evidence, or to create evidence. But one thing that is really frustrating to say to a public manager or to a policy leader is that, for most really important issues, we don't actually know the right answer yet. There are very few issues that we've replicated enough, and tested in enough different contexts, that we're pretty sure they will or won't work before we actually run the tests. And if that's the case, then absolutely, we shouldn't be retesting them. In most areas of social policy, or at least in public policy areas where we see behavioral science playing a role, we're still learning what works in different contexts. And so, the critical decision that a policymaker has to make is, "Am I willing to take on the risk, and the potential reward, of looking at the data, testing rigorously, and sharing those findings with the world, no matter the outcome?"

So that might be even more difficult than increasing adoption of evidence-based practices. But I think that's where we are for a lot of the critical areas that we're currently working on.

Brooke: Yeah. And that's not to say that there are not better or worse hypotheses out there. By better or worse, I mean, more likely to be true or less likely to be true.

Elizabeth: Absolutely.

Brooke: Yeah. We need to, again, navigate carefully here to make sure that we are finding the appropriate middle ground between, “We've totally got the answer, we're so certain of this, there's basically no need to test. I mean, we should test anyway, but there's basically no need,” which is way too far to one side, and, on the other side, “Well, we really don't know what's going on, so it's absolutely essential that we test, because otherwise it's just a mug's game.” Again, we're somewhere in the middle. And depending on the level of experience of the person who's leading the project, the set of hypotheses you'll put together will, hopefully, be stronger.

Within that set of five different things you're going to test, you've got better chances of finding one or two that come out of there as really, really impactful interventions. And similarly, a more experienced and more seasoned practitioner is somebody who's going to know more of the pitfalls to avoid in conducting the trial, and in really measuring what the outcome of an intervention at scale is expected to look like. But finding the appropriate balance to navigate that tension, saying, "Well, we know quite a bit, but not everything. And we certainly want to know more before we proceed," is a very, very difficult balance to maintain.

Elizabeth: Absolutely, absolutely. And that's why building trust between researchers and the public sector leaders who are ultimately accountable, as you noted, for these projects is really key, right? So, both sides need to trust that the other side has insights and experience and expertise about what's likely to work and what's likely to fail. And using the expertise from both sides to co-design a trial is really where I think we can make a lot of progress on some of these issues.

Brooke: Okay. So, let's wind way back now. We've talked about a lot. But where do we start? For somebody who's been listening to this and saying, "Okay, this conversation really nicely encapsulates the tensions that I'm feeling in my work, and I want to move toward deeper engagement of behavioral insights in policymaking," what is something concrete and practical that they can start doing tomorrow morning to start working in that direction?

Elizabeth: That's a great question. I mean, the obvious answers are to listen to podcasts like this one and to reach out to researchers who are doing this work. So I won't say that. But certainly, my team at The People Lab and other teams across the country and across the world are excited to partner with public sector leaders or public managers who are struggling with these questions in their day-to-day work. So there's a lot of demand from both sides. And I think it's an exciting time for the field.

Beyond that, my sense is that the best place to start is with a measurable problem that has a behavioral component to it. And what I mean by that is: if you can get to a point where you are really clear in your work that you need people to do something differently that's measurable and actionable, like I need them to show up to this event, or I need them to apply for this job, or I need them to fill out this form, whatever the case may be, something that is truly observable and measurable, then scoping out the problem is actually 80% of the work. The behavioral science to actually move the needle can come later, through reading or through reaching out to behavioral researchers.

But scoping out what the actual problem is, in a behavioral way, is quite hard and requires a lot of insight into the specific context. Just to give you a sense of what this isn't, because that might sound obvious: it's different from saying, "I want people to understand that this is valuable," or "I want people to value this thing," or "I want people to care about this issue more." Those are all potentially critical parts of the problem, but they are not observable behaviors. And so, going from a big idea to a specific actor and a specific behavior is something that takes some thinking. And usually, the practitioner in charge of that area already knows what that challenge is, already knows what that barrier is, and can scope out the problem in those terms before reaching out to behavioral scientists who can support this work.

Brooke: And you mentioned earlier, relationships as well, and trust. So, it strikes me that relationship building is probably also something that you can start working at, starting tomorrow morning. And in the spirit of wanting to identify specific behaviors, and to be really concrete about it, beyond just dispositions like "I want so-and-so to heed more of what I have to say": especially if it's early-stage relationship formation, it could be something like, "I want to reach out to more people who are involved in the policy decisions that I'm working on and that I care about. I want more of those people to say yes to having coffee and an informal chat about it."

Elizabeth: Exactly. That's exactly the kind of measurable behavioral step. And I should say, and perhaps this is pretty clear, what I didn't say is: pick your favorite behavioral nudge and then go find a partner. One thing that happens once people have taken a course in behavioral science, or once they've read Nudge, is that they get excited about the nudge itself, as opposed to the outcome that we're trying to move. And that's certainly a recipe for disaster when it comes to designing these projects.

Brooke: Yeah. Fall in love with the problem, not the solution.

Elizabeth: Exactly.

Brooke: Yup. All right, Elizabeth, this has been wonderful. Thank you so much for the insights and the time that you shared with us today.

Elizabeth: Thank you so much for having me. This was a fantastic conversation.

Brooke: Thanks. Hope to talk to you soon.

Elizabeth: Bye.

We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.

About the Guest


Elizabeth Linos

Elizabeth Linos is an Assistant Professor of Public Policy at UC Berkeley. Her research focuses on how to improve government by focusing on its people. Specifically, her studies consider how we can improve diversity in recruitment and selection, how to support frontline workers against burnout, and how different work environments affect performance and motivation in government. Her research has been published in academic journals including the Journal of Public Administration Research and Theory (JPART), Public Administration, JAMA, the British Medical Journal and others. Her work has also been highlighted in media outlets including the Harvard Business Review, The Economist, Governing magazine, and Slate.

As the former VP and Head of Research and Evaluation at the Behavioral Insights Team in North America, she worked with city governments across the US to improve programs using behavioral science and to build capacity around rigorous evaluation. Prior to this role, Elizabeth was a policy advisor to the Greek Prime Minister, George Papandreou, focusing on social innovation and public sector reform. She has also worked for the Abdul Latif Jameel Poverty Action Lab (J-PAL), evaluating and designing innovative social programs in Bangladesh, Morocco, and France. Dr. Linos holds a PhD in Public Policy from Harvard University, where she also completed her A.B. in Government and Economics, magna cum laude with highest honors.

About the Interviewer


Dr. Brooke Struck

Dr. Brooke Struck is the Research Director at The Decision Lab. He is an internationally recognized voice in applied behavioural science, representing TDL’s work in outlets such as Forbes, Vox, Huffington Post and Bloomberg, as well as Canadian venues such as the Globe & Mail, CBC and Global Media. Dr. Struck hosts TDL’s podcast “The Decision Corner” and speaks regularly to practicing professionals in industries from finance to health & wellbeing to tech & AI.
