The Elements of Choice with Eric Johnson

Podcast | September 27, 2021

People think nudging and choice architecture are the same thing. But ‘nudge’ has gotten to be this huge, anything-that-affects-human-behavior kind of a term. It also implies you can nudge or not nudge. The important point about choice architecture is that it’s not optional. Every decision has a choice architecture.

Intro

In this episode of The Decision Corner podcast, Brooke is joined by Eric Johnson, director of the Center for Decision Sciences at Columbia University and author of the upcoming release, The Elements of Choice. Johnson’s expertise lies in how we make decisions, and in how those decisions are influenced by how our choices are presented. This conversation details important topics from the book, such as what choice architecture is and how it relates to choice engines. It also dives into how, once we are aware of how choice architecture works, we can use it to our benefit and make better decisions. Some of the topics discussed include:

  • Differentiating choice architecture from nudges
  • What designers are, and how they influence others’ decisions
  • Assembled preferences and how they influence our decision making
  • How choice engines differ from choice architecture
  • The gradual convergence of choice architecture and choice engine design
  • Using choice architecture to our benefit, and to improve or create new interventions

Key Quotes

Who is a choice architect?

“We’re all choice architects, we lay out decisions for other people. You have kids, you lay out outfits for them in the morning. That’s being a choice architect. When my wife, who is also a psychologist, actually talks about what movies we should see tonight and names three options, she’s being a choice architect.”

The influence of choice architects on decision making

“Most often, designers are unaware of their effects. You don’t sit there and say, “Well, how are we going to get her to pick the blue outfit?” If you’re a New York City school, you don’t say, “How are we going to get people to choose the right school?” It’s as if you’re saying, “Here’s the menu, pick what you want,” and naively think that there’s no influence.”

How choice architecture can lead us to make better decisions

“One of the things I think about is that the choice architecture should be customized. Instead of nudging everybody, you could actually change the choice architecture to get them to do what’s in their best interest.”

Assembled preferences and how they impact our decision making

“A psychologist in Iowa in the ’80s did a great study where he had people come into the lab and eat a hamburger. But the hamburger is described either as 80% lean or 20% fat. Now, what’s interesting is that of course they add up to 100%, but people rated the taste of the hamburgers differently. They would pay more for the lean than the fat. That’s a good example of a case where actually we would think we would know which kind of hamburger we would like, yet there’s an influence.”

Choice engines and how they differ from choice architecture

“A choice engine, I’m going to call it ‘choice architecture on steroids’. We really don’t make choices on paper much anymore. If you go into the store, it’s obvious there’s choice architecture. There’s a certain number of shelves and a certain number of facings. But the power of a choice engine is that it can be done online, virtually, and that lets you do lots of things. One of them is you can customize the choice architecture. So, if I know you’re looking for a sports car, I can actually put the sports cars first in the list of cars.”

How we may have too much control with respect to choice engines

“You can sort when you’re on a website. I can sort by price or I can sort by quality. First off, the default sort has more influence than people think. However you sorted by default, that’s going to make a difference. If I sort by price, price is going to look more important. If I sort by quality, quality is going to be more important. There was a classic study that was done at Duke where people actually got to choose wine. In one case, it was sorted by price. Guess what? They got a lot cheaper wine than if it was sorted by quality. They got less wine, but it was better wine. We do that to ourselves.”

Transcript

Brooke Struck: Hello, everyone, and welcome to the podcast of the Decision Lab, a socially conscious applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, research director at TDL, and I’ll be your host for the discussion. My guest today is Eric Johnson, director of the Center for Decision Sciences at Columbia University and author of The Elements of Choice. In today’s episode, we’ll be talking about choice architecture, how to do it well, how to do it responsibly, and how to build good choice engines. Eric, thanks for joining us.

Eric Johnson: Very happy to be here.

Brooke Struck: Please tell us a bit about yourself and what you’re doing at Columbia.

Eric Johnson: I’ve been at Columbia for a while now, but in the field for longer than I’ll divulge. But the thing that’s made me excited is the fact that my research is actually starting to have, or has had, an impact. And the key idea is choice architecture. Let me take a second, if I could, to define that in normal-person terms. We’re all choice architects, we lay out decisions for other people. You have kids, you lay out outfits for them in the morning. That’s being a choice architect. When my wife, who is also a psychologist, actually talks about what movies we should see tonight and names three options, she’s being a choice architect.

We’re all choice architects all the time. Companies are obviously designing websites, which is what I call a choice engine, and governments are laying out things like how we’re going to choose health insurance. These are all places where choice architects go to work. Let me make that a little bit simpler by calling choice architects designers, because that’s what they are: they are hidden partners, they design the places where we make decisions. The key insight is that the way you do that influences the choice people make. That’s what the book is about. The book is a guide to help all of us, who are designers, help other people, whom I call choosers, make better decisions. So, it’s a step-by-step guide to thinking about that.

Brooke Struck: I like the words that you chose there, that choice architecture influences decisions, it doesn’t determine them. The example that you gave of the parents laying out clothes for the children is a perfect example of that. As the father of a three-year-old girl, let me tell you that I can certainly try to influence her decision by laying out certain clothes, but I can far from determine it. So, this I think is one of the key concepts around choice architecture, is we’re not determining, we’re influencing. Let’s talk a little bit more about choice architecture before we dig into some of the concepts that are in that neighborhood.

The term probably got its biggest boost and prominence from the book, Nudge, published over a decade ago now. For those who are just getting into the field, it’ll be helpful to have a bit more of an explanation of what choice architecture is. For those who are old hands, I think some agreement on terminology might be helpful before we continue. So, what is choice architecture? What does it do and how does it work?

Eric Johnson: That’s a very good question. One of the things that often happens by the way is people will think nudging and choice architecture are the same thing. I think actually the people who wrote Nudge, Thaler and Sunstein, saw them as the same thing. But nudge has gotten to be this huge… anything that affects human behavior kind of a term. It also implies you can nudge or not nudge. Okay? The important point about choice architecture is it’s not optional. Every decision has a choice architecture. You’re going to influence people whether you know it or not. Let me give you one very clear example going back to your three-year-old daughter.

One thing you decide is how many outfits to put out. Now, that’s not an option, you’ve decided. Is it one? Is it two? Is it 20? One of the examples of this is, in New York City, kids choose high schools. You could present them with a set of three, you could present them with a set of 50. In New York, it turns out to be 769. Now, obviously that’s going to influence what people choose, because if it’s not in the set, you’re not going to choose it. So, choice architecture is a set of decisions designers make that are going to influence. Now, what we mean by influence is, probabilistically, some of the time, you’re going to choose something different.

Most often, designers are unaware of their effects. You don’t sit there and say, “Well, how are we going to get her to pick the blue outfit?” And you do not, if you’re a New York City school, say, “How are we going to get people to choose the right school?” It’s as if you’re saying, “Here’s the menu, pick what you want,” and naively think that there’s no influence.

So, how does it work is a question. Let’s dig into that a little bit. I think there are two basic things that happen. One is, let’s go back to the New York City schools. With 769, you’re not going to look carefully at all 769 schools. You’re going to somehow figure out how to screen, how to make that set smaller. And that’s going to influence your decision. If you screen by how far schools are from you, you’ll end up with a different set of schools than if you screen by academic quality. And there’s lots of research showing that the way you display that set of schools will actually influence what parents choose. That’s a notion in the book I call ‘A Plausible Path’. You can look at only a subset of the information, and the designer helps you figure out what subset, for better or for worse, that is.
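
To make the screening idea concrete, here is a minimal Python sketch, with a made-up school list, of how two equally reasonable screens over the same options produce different consideration sets:

```python
schools = [
    {"name": "A", "miles": 1.2, "quality": 6.5},
    {"name": "B", "miles": 7.8, "quality": 9.1},
    {"name": "C", "miles": 2.5, "quality": 8.2},
    {"name": "D", "miles": 0.8, "quality": 5.0},
]

# Screen 1: keep only nearby schools.
nearby = [s["name"] for s in schools if s["miles"] <= 3.0]

# Screen 2: keep only the highest-quality schools.
strong = [s["name"] for s in schools if s["quality"] >= 8.0]

print("screen by distance:", nearby)  # ['A', 'C', 'D']
print("screen by quality: ", strong)  # ['B', 'C']
```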

Brooke Struck: Yeah. ‘For better, for worse’ is a good pivot point there, right? As choice architects, we’re trying, or one would hope that what we’re trying to achieve is, to help people get to where they want to go. One of the points that you raised, though, is that people have a lot of different goals, and those goals are always in tension and always in flux. How does that impact our ability to figure out whether we’re doing a good job, and whether the architecture is actually operating the way that it’s designed to operate, when there are lots of different people out there and they want different things?

Eric Johnson: It’s an important point, because most of the time we think of choice architecture as being one size fits all, that is, I have one choice architecture for everybody. And that could be a problem. For example, I have two uncles, one of whom is a real party animal, should I say, and another who was actually a little bit less fun, but very, very modest. I suspect they’re going to live different lengths of time. They came to me and asked me, “When should I start claiming social security?” Now, what’s a really obscure fact is that social security actually pays you until you die, which is great, and you can claim it in the US anytime from age 62 to 70.

About half the people, a bit less, actually choose it at 62. The reason that’s an important decision is you actually get more money the longer you wait. It’s about an 8% increase. So, you might get, for example, $1,500 a month if you retire at 62, or start collecting at 62, and you might be getting $2,000 to $2,200 if you wait. Now, the two uncles basically said, “When should I start?”

Now, I knew one uncle was probably not going to be around as long as the other. So, if I were a choice architect, I might want to try and nudge them both in the same direction. But each of them had a different best option. The partying uncle should have actually started claiming at 62 so he’d get the most money before he went on, should we say. The other uncle, of course, probably should have waited if he could, and he’d be better off because he would be claiming it into his 90s. So, the point you’re making is very good: different people have different needs, and so, how do we do that?

One of the things I think about is that the choice architecture should be customized. So, instead of nudging everybody, let’s say, to wait, you could actually change the choice architecture to get them to do what’s in their best interest.
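
To make the claiming arithmetic concrete, here is a minimal Python sketch using the figures Johnson cites, roughly $1,500 a month at 62 and about an 8% increase per year of delay; the life expectancies are hypothetical stand-ins for the two uncles:

```python
def monthly_benefit(claim_age, base=1500.0, growth=0.08):
    """Monthly benefit for a given claiming age (62-70), per the ~8% annual increase."""
    return base * (1 + growth) ** (claim_age - 62)

def lifetime_total(claim_age, death_age):
    """Total benefits collected from claiming until death (ignoring inflation and interest)."""
    months = max(0, death_age - claim_age) * 12
    return monthly_benefit(claim_age) * months

# The two uncles: one expects a short retirement, the other a long one.
for death_age in (75, 95):
    best = max(range(62, 71), key=lambda age: lifetime_total(age, death_age))
    print(f"Expect to die at {death_age}: claim at {best} "
          f"(~${lifetime_total(best, death_age):,.0f} total)")
```

Under these toy numbers, the short-lived uncle does best claiming at 62 and the long-lived one at 70, which is exactly the customization point being made.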

Brooke Struck: One of the questions that arises there is about those interests themselves. For instance, this closely mirrors a conversation that I had about another social security issue in the last few months. One person was saying, “What we want to be promoting is people getting the most money out of the system,” and the other person was of a slightly different philosophical bent and said, “Actually, what we want to be doing is giving people freedom to choose. We don’t want to be intervening in their choices. Their agency, and just exhibiting their agency or exerting their agency unimpeded, is the most important part of this.” But one of the things that I felt was missing there is that neither of those two individuals was actually asking people what they wanted out of this.

Eric Johnson: That’s a very nice observation. I’ll just take one little quibble with your friend who wanted to give people unimpeded choice. It’s practically impossible to not impede upon people’s choices. For example, social security, they decided to call 65 or 66 the full retirement age. There’s nothing special about that. You’re still going to get an 8% increase each year. What’s fascinating is that basically almost everybody retires at the full retirement age. That says two things. One is, impeding may not be the right word, but certainly the government is promoting retirement at 65 and there’s no option not to.

You have to have a choice architecture. That’s the first thing. The second thing is you’re absolutely right. It turns out social security, I think, is pretty simple. People want as much money as possible, but the important thing is that that’s relatively clear, there’s a right answer. I think your question gets much more fundamental when you’re talking about things where there’s not a clear answer.

Brooke Struck: Yeah. Let me problematize it just one step further for fun. I know this is a very radical concept, but just follow me here. What if the money that we get is actually supposed to enable us to lead a good life, and the money is just an instrument to achieve that? Then that complicates it already, introducing an additional wrinkle here, because then the question is also about the time that I get from retiring earlier. It’s not just about the dollar value. Those years between, let’s just say, 62 and 65, what is the value of being retired and having the freedom to dispose of my time as I please during those years?

And the earliest years of retirement are the ones where you’re also going to be the healthiest, and all these kinds of things. So, that really puts a stick in the spokes, where all of a sudden it’s not trivial to assess what the “right answer” is for even one individual person. But as complicated as it might be to determine what an individual person’s preferences are, even in talking to them, surely we’re even worse off if we’re not even asking the difficult question.

Eric Johnson: That’s right. Let me present one very short dodge and then I’ll get to the meat of your question. The short dodge is, a lot of people have money already saved, and that money is not usually going to make, risk-free, the 8% interest that social security would. So, actually they’re choosing an option which is dominated. They should be consuming their own funds for those years and then collect social security. That’s going to be my dodge; it says there’s a simple way out. The other way is, I think you’re right. People have to construct their preferences.

One of the things that’s really quite true is that we don’t know what we want a lot of the time. We have many goals. So, your example is perfect. I want to stop working, not me personally, but one might want to stop working, but one might want more money later. And that trade-off is going to be influenced by the choice architecture. So, it’s another place where you don’t get a free lunch. You can’t not impede.

I might ask you, “How long are you going to live to?” Because if I’m doing this consulting you’re talking about, giving someone advice, it’s going to make a difference. I could also ask the question, what year do you think you’ll die by? Now, if you think about that, those are essentially the same question, right? You’re going to name a year.

Now, it turns out when we do surveys and ask people those two questions, they come up with very different answers. It’s almost 10 years of difference. Asked when they’ll die by, people say, “Oh, I’m not going to make it.” They think about their aunt Maude, who died early, they think about the three cigarettes they smoked when they were in college, they think of the fact that they’re 10 pounds overweight. If I ask how long you’re going to live to, you might think about the fact, “Well, I did exercise last week.” You might actually have seen a gym recently. And of course, medical science is doing wonderful things. Now, that’s a good example of what we call an assembled preference.

One of the ways choice architecture works is basically it helps you assemble your preferences. Now, you can assemble them for the good or for the bad, but it’s going to be an influence. It’s another way where you have to be a designer, you don’t have a choice.

Brooke Struck: Tell us a bit more about this idea of assembling preferences. I’ve got this idea in mind that there are certain preferences that I have that are really, really stable over time. They’re very clear in my mind. If you ask me today versus tomorrow, there’s not going to be much change. Probably framing effects and this kind of thing are going to be more limited there. But then there’s this whole other range of stuff in my life where I’m not entirely sure what it is that I want and I am, let’s put it, open to persuasion.

In those instances, I would expect framing effects to be much more impactful. I would expect to see much more variation if you asked me today versus tomorrow, before lunch versus after, this kind of thing. So, what is assembling preferences? How does that function? Well, I’ll come back to this, but I’ll tee you up already to ask: what’s the role of choice architecture, versus other things that are going on in our lives, in helping us assemble these preferences?

Eric Johnson: Assembled preferences means we have too many preferences. And if you think about it, I want to be svelte, I also want to eat that great piece of cake, and I’m continually deciding which of those things to do. So, lots of the external environment, the way we name things, has that effect. A psychologist in Iowa in the ’80s did a great study where he had people come into the lab and eat a hamburger. But the hamburger is described either as 80% fat… Sorry, that was a very bad hamburger. 80% lean or 20% fat.

Now, what’s interesting is that of course they add up to 100%, but people rated the taste of the hamburgers differently. They would pay more for the lean than the fat. That’s a good example of a case where actually we would think we would know which kind of hamburger we would like, yet there’s an influence. Sometimes, I suspect when we think we have a preference, we have that preference right then, but it may not be the same later. Yes, I hate liver, I’m not going to eat liver, but a lot of preferences are actually assembled.

Brooke Struck: And getting into choice architecture, it sounds like what you’re talking about here is that there’s this whole gamut of preferences that we have internally and they’re always vying for supremacy against one another and that conflict is going on all the time. And some of those moments are moments when we’re in the choice architecture that’s been very nicely set up for us. What is the role of choice architecture in helping us to assemble our preferences as opposed to the hurly-burly of all the other times?

Eric Johnson: Right. Choice architecture has lots of tools, and one of the ones that’s obvious is how I describe the options. So, in the example I used with the hamburger’s ground meat, I could call that attribute lean or fat. And that’s actually making the way I retrieve things from memory different. Same thing with years to live or years to die. That is the same thing. So, even how we describe the options can have a big influence. Accessibility in general does. Whenever we see a picture or an ad, it can actually cue things and make them more accessible. So, even simple things like that can influence our choices.

Brooke Struck: The mental model that I described before, where some of my preferences are extremely stable, and let’s say somewhat impervious to influence or persuasion and this kind of thing, and a whole other set of things that are really in flux, is that an adequate mental model? Or should I really stop believing that I have any preferences that are as stable as all that?

Eric Johnson: I really do think I will always turn down liver. Although there was a time I was having dinner with a Nobel Prize winner and our host was serving liver. I did eat a little, but still, it’s not something that I’m going to do. So, I think that’s right. But a thing to realize is that many decisions we make in life, we don’t make as often as ordering an entrée. Things like how we invest, most people don’t spend their time doing that. They’re much more likely to be thinking about their favorite baseball or football team than they are about their investments.

There are people who care about that a lot, but many very important decisions, mortgages, hopefully spouses, are decisions we don’t make that often and are probably much more affected by assembled preferences.

Brooke Struck: Right. Let’s think about the role of choice architecture here, if there are indeed some preferences that are quite stable and not that open to being guided by choice architecture. I’ll take a moment here and just say something that I think is also fair: it’s equally the case that we are guiding people’s decisions as that we are helping them to figure out what it is that they actually want. That’s part of what this assembled preferences idea is about, if I’m understanding it correctly. How is it that, as choice architects, we can identify those circumstances or those moments when the preferences are likely to be more stable versus less stable, so that we can accommodate for that in our design decisions?

Eric Johnson: Something that I think is important for designers to think about is how we tell whether we have a good choice architecture or not. I draw the analogy of a flight simulator. So, how do we know whether we have a good cockpit design or not? Well, we basically put the pilot in and say, “Okay, land at Charles de Gaulle Airport,” and see if they crash or not. We can do the same thing. We could have someone sit in front of a webpage and ask, “Can they pick the right option?” That’s something we do all the time. So we say, “Here’s a set of health insurance policies, pick one.”

And if they pick one that’s really bad, and here we can say objectively, because it’s worse on every attribute than the others: it’s more expensive, it covers less, there are high deductibles, then you know somebody is making mistakes. One of the ways we can actually assess, and this goes back to a previous question, whether or not somebody is making a good choice is seeing if they can actually choose things that are better for them, objectively defined. Or we can actually give them an assignment. Like we told the pilot, “Land at Charles de Gaulle,” we can say, “Find the most cost-effective health insurance policy.”

Brooke Struck: When we ask this different question, rather than guiding them and saying, “Go and find the option that meets these criteria,” we ask them to go and find the option that is best for them or that they prefer. Are there specific signs or signals that we should be looking for to say, “Okay, well, the way that we see people behaving in this choice ecosystem suggests to us that actually people aren’t entirely sure what it is that they want, the preference assemblage process is not working effectively because their decisions are inconsistent and these kinds of things.”

What are the tells that we’re looking for that tell us we might need to bake a little bit more preference assemblage into the recipe, because we’ve ordered things nicely, we’ve chosen good defaults, and we’ve selected a good number of options to put forward, and this kind of thing, but people are still making inconsistent choices?

Eric Johnson: Right. One of the things you could do is change the choice architecture and see if it changes people’s choices. You mentioned defaults, and this is actually an important thing that hopefully I’ve contributed a little bit to. Defaults are essentially what happens when you don’t make an active choice. Probably the most famous piece of research I’ve been associated with was looking at organ donation and what the default was when the state presented you with the option to be a donor or not be a donor. It turns out that has a huge influence. That’s another way of saying people are inconsistent.

That’s a sign that the choice architecture is actually influencing the choice. We’ve done a lot of studies now, and we did what’s called a meta-analysis, where you look at all the studies that have ever been done, and simply changing the default, in lots of cases, on average, changes choices about 30%. That is, simply by changing one line of HTML code, I can make an option on average 30% more popular. That suggests it’s a cue that you need to do something. So, consistency is one of the big hints.
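
As a toy illustration of how a single preselected option can shift choices on this scale (a simulation sketch with made-up parameters, not Johnson’s meta-analysis):

```python
import random

def choose(default, stick_rate=0.30, baseline_yes=0.40):
    """One simulated chooser: some share simply accept whatever is preselected."""
    if random.random() < stick_rate:
        return default                      # stays with the default
    return "yes" if random.random() < baseline_yes else "no"

random.seed(0)
for default in ("yes", "no"):
    n = 100_000
    share = sum(choose(default) == "yes" for _ in range(n)) / n
    print(f"default={default}: {share:.0%} choose yes")
# With a 30% stick rate, the two defaults land roughly 30 points apart.
```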

Brooke Struck: Yeah. I like that. As you switch up the choice architecture, if there is a very stable preference underlying it, it should be more resilient against the changes in architecture than a preference that’s more in flux and in the process of being constructed. Just to be clear, the term that we’ve been using up until now, inconsistency, often gets a bad rap. You think, “Oh, well, the economically rational agent should be one who is able to consistently make the best choice and will consistently identify that choice, whatever smoke screens might be thrown up in the way,” this kind of thing. But actually, inconsistency is not necessarily a bad thing.

Sometimes it just reflects the fact that we don’t have a very strong preference for one thing or another. So, that’s a small crusade on behalf of inconsistency. I want to back up one step here and talk about why it is that you’ve written this book. From our conversations, my understanding is that you’ve identified that there’s a lot that’s been written about choice architecture, but not a lot of concrete guidance around when to use it and how to use it. What do you feel are the biggest challenges this lack of concrete direction has created in the field, the gap you’re now trying to fill or the issue you’re trying to rectify with the book?

Eric Johnson: It’s not ‘when do you use choice architecture’, you have to use choice architecture and it can be inadvertent. So, I think that’s an answer to your question, that a lot of choice architects, a lot of the designers are doing choice architecture without knowing they’re doing it. I suspect my wife, because she’s a psychologist, knows how to present choices to get her way. But most of the time, we’re not quite so deeply informed. The reason the book exists is really to help designers do a better job and because they have to do that job.

What research shows, by the way, is that the people who are making the choices aren’t really aware of the effects of choice architecture. But increasingly, you see that the people who are designers aren’t aware of the influences they have either. So, in some sense, the book really is a bit of an awakening to people that designers make a difference. Choosers should know that, but designers should know that too.

Brooke Struck: You talk about designers and this, I think, is a nice segue into some of the concepts you talk about later in the book. Specifically, one that jumped out to me is this idea of a choice engine. Once again, let’s just get some ideas out on the table so that we all know what we’re talking about. What is a choice engine and how does it differ from choice architecture as we’ve been discussing it up until now?

Eric Johnson: Essentially, a choice engine, I’m going to call it choice architecture on steroids. We really don’t make choices on paper much anymore. When you go into the store, it’s obvious there’s choice architecture. There’s a certain number of shelves and a certain number of facings. But the power of a choice engine is that it can be done online, virtually, and that lets you do lots of things. One of them is you can customize the choice architecture. So, if I know you’re looking for a sports car, I can actually put the sports cars first in the list of cars.

A choice engine essentially is choice architecture that can be customized. It can also teach you; it can educate you about the environment as well. Netflix, the insurance website your government shows you, Amazon, those are all choice engines. I call them engines because they are there to help you make a choice, and they can do things that you couldn’t do on paper or in the physical world.

Brooke Struck: When you talk about customization, it sounds like there are two things that are going on there. The first is that it’s more dynamic. For instance, the reordering of options on Netflix or on Amazon happens much, much, much faster and more frequently than the change in a government form, almost like hyperbolically different. But even the arrangement of food on a grocery store shelf, that probably gets shuffled a little bit more often than the organization of a government form, but certainly nowhere near what’s happening in Netflix and Amazon.

There’s a dynamism there, there’s a pace of change which is so intensely different that it might lead to qualitative differences, not just quantitative ones. But the second is also, if I’m understanding you correctly, there’s a personalization there around customization: it’s not just that it’s more dynamic, it’s also that we’ve got segmentation, that you are getting a different look than someone else is getting.

If you and I walk into Amazon, we are getting very, very different experiences of the Amazon world. Does that cash out what you mean by customizable? Or is there something more that I’m missing?

Eric Johnson: No, I think you’ve gotten a big part of it, but it’s customizable based upon what Amazon thinks or what Netflix thinks it is that I want, or at least what they will profit from in terms of what I want. Netflix is not there to find you the best movie, they’re there to find you the best movie given how much it costs them to show you that movie. It’s a sort of joint optimization. They want you to be happy, but they also want to be profitable. So, it turns out Amazon customizes the page for everyone, so not only do you and I see different ones, but everybody sees different ones.

And they do many experiments to optimize their choice engine. Each of those little, small pictures that you see for each of the movies has actually been tested to see which ones are the most effective, and often most effective for a particular person, or at least a particular market segment. It’s actually an elaborate form of personalization that makes everything we’ve talked about earlier look like child’s play.

Brooke Struck: One of the other features of choice engines that you talk about is that there’s personalization not only from that kind of supplier side, but also from the user side. For instance, when I go and interact with an online store or a streaming service or something like this, I can choose which filters I want to apply. So, there’s a sense of control and hopefully more than just a perception of control. There’s the reality of control as well that I can go and apply these kinds of filters and I can customize the choice ecosystem for myself. Do we have the control we hope that we have?

Eric Johnson: Absolutely right. We may even have too much control. Let me explain what I mean by that. A classic case of what you’re talking about is that you can sort when you’re on a website. I can sort by price or I can sort by quality. First off, the default sort has more influence than people think. That is, however the choice architect, the designer, sorted it by default, that’s going to make a difference. So, we know there are a couple of effects of the default and of any sorting. If I sort by price, price is going to look more important. If I sort by quality, quality is going to be more important.

There was a classic study that was done at Duke where people actually got to choose wine. In one case, it was sorted by price. Guess what? They got a lot cheaper wine than if it was sorted by quality. They got less wine, but it was better wine. So, we do that to ourselves, and I suspect, though there’s not a lot of literature on it, that sorting has a big influence, maybe even a bigger influence than we think.
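
A minimal sketch of the sort-order lever, with a hypothetical wine list; the options are identical, only the default ordering changes:

```python
wines = [
    {"name": "House Red", "price": 9,  "rating": 3.1},
    {"name": "Reserve",   "price": 34, "rating": 4.6},
    {"name": "Mid-Range", "price": 18, "rating": 4.0},
]

# Two default sorts over the same inventory: the architecture, not the options, changes.
by_price   = sorted(wines, key=lambda w: w["price"])                 # cheapest first
by_quality = sorted(wines, key=lambda w: w["rating"], reverse=True)  # best-rated first

print("by price:  ", [w["name"] for w in by_price])
print("by quality:", [w["name"] for w in by_quality])
```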

Brooke Struck: The third element that you mentioned also in your initial introduction of this concept is that it can help us to understand the choices. Through this dynamic exploration, we can improve our comprehension. I think this ties back in a really, really powerful way to what we were talking about before in terms of assembling preferences. If we want to put them on extremes, there are two different things that you might be doing when you’re going and looking in an online shop. One is I already know what it is that I’m looking to buy and I just need to go and find it and purchase it.

And the second is, I’m not sure what I want to buy, I want to go and see what’s available and learn about what’s out there and make a choice or make a decision rather than just put into practice a decision that I’ve already made. Those are at the extreme ends of preference assembly. What is the additional power of these choice engines that allows us to be so much more helpful to users in terms of improving comprehension relative to the more static tools of choice architecture?

Eric Johnson: A very simple version of that is the online review. We look at online reviews so we can see what other people’s experiences with a product are like. Another version of it is just explaining the terms. So, if you’re buying something you’re not an expert in, say I’m shopping for a knife sharpener, I want to know what the terms mean. And that’s a form of comprehension that a choice engine can provide. This can be a very powerful influence, and one that I think designers largely don’t realize. It’s as if the choice engine almost becomes a choice coach.

For example, when I’m 62, to go back to social security, I might want to know what people feel like when they’re 70 and they are actually getting the bigger check. That’s not something a government is likely to do, but actually, I can educate that 62-year-old about what it is like to be a 70-year-old who gets $1,000 more a month. How do I feel then, as opposed to now? That is very difficult for me to know.

Brooke Struck: We were talking, or I mentioned earlier, about this difference, this qualitative difference… or sorry, the quantitative difference, speed, leading to some qualitatively different outcomes. For instance, if I think about wanting to make a decision about social security, there’s this idea of coaching that you just mentioned: a system that is dynamic at a certain pace can be a coach for me.

A system like a government form can’t be a coach because the pace of change, the timelines of change, are so long that when I as an individual want to make a decision, there’s not going to be any back and forth within the time window within which I’m going to make my choice. So, is that really where this step change comes from, that when you reach a certain pace, you fit within the decision window of an individual and this is when you shift from being just an architecture to being a coach?

Eric Johnson: I think that’s part of it. I’m going to shift us to another example for a second, one that, although I don’t actively use it, I did a lot of reading and research about. And that is this choice engine we call dating websites. About half of all Americans actually end up meeting their spouse using such a site. Those sites are choice engines, and the people who designed them are choice architects, or designers. One of the things that is interesting is how rapidly you present the options. Excuse me for making people sound like options, but they are.

I contrast a site that was invented by three Korean women, because they wanted a site that they could use, with a site that you see all the time: Tinder. There’s even a line in Urban Dictionary about Tinder thumb, which is the exhaustion and pain you feel from swiping left and swiping right too often. Now, what you’re doing there is going to be very different from what their website did. It’s called Coffee Meets Bagel. What it did initially was present one option a day. Now, if you think about that, that’s going to make you look much more deeply at that person.

It might be frustrating because you can’t quickly swipe left or right, but do you think you use the same process in those two cases? Coffee Meets Bagel, you’re going to look beyond the picture. You actually might read something before you decide whether or not to write to somebody. What research has shown is in a site where you get lots of options, you look at very few characteristics, probably appearance. It turns out there’s some nice research that shows that women look for men who are taller than they are.

If you’re not taller, your probability of getting written to goes to zero. Now, the more options you have, the more likely you are to do that kind of screening. You end up with, I think, very qualitatively different outcomes, and quantitatively different outcomes, if you’re using one site or the other. So, to your point about rhythm: one a day lets you look more deeply; one a second, you look more shallowly.

Brooke Struck: I really like that. This idea of positive friction, which is one that I first encountered several years ago, is one that comes up for me time and time again, pushing back against this idea that speed and efficiency are always to be optimized. There are just lots of good examples where that’s not the case, where slowing things down actually does improve outcomes.

Eric Johnson: Let’s say that people misrepresent themselves on online dating sites, maybe men actually claim they’re taller than they really are. If you do this screening with, literally, a sample of liars, then the more likely you are to lie, the more likely you are to get chosen, and you end up with, literally, dates that don’t measure up. Speed leads to worse, not better, decisions.

Brooke Struck: Yeah, it also reminds me of a hilarious study that I read probably a couple of years ago now looking at the statistical distribution of heights in countries versus the reported distribution on dating sites and wealth, same thing. There are all these kinds of classic tropes that are good for a laugh.

But shifting back to the question that I had in mind: you talked about writing this book for designers, and it seems like we’re seeing two worlds come together, a world of designers who are much less familiar with the world of behavioral science and behavioral economics, psychology, this kind of thing, that cognitive and social perspective on the human being as a decision-maker.

Designers who are not so familiar with that can really benefit from diving into that world and learning about choice architecture and choice engines and the stuff that you’ve talked about there. But similarly, from the other side, I think that a lot of people coming from the behavioral perspective are less familiar with the world of design. For them, perhaps choice architecture is old hat, but choice engine is a really, really challenging thing because as we’ve been talking about, there are these quantitative differences that really do lead to qualitatively different kinds of outcomes.

And the patterns of thinking that we have, and the… not the assumptions that we jump to necessarily, but the hypotheses that we jump to as behaviorists, are not really calibrated to those kinds of highly dynamic environments. We might not think about how to approach those differently as testing ecosystems, for instance. I’m wondering whether something that you’re seeing here is that there’s a bit of a shift in skill sets or priorities, from a focus on choice architecture towards choice engine design, and similarly from design coming from a behaviorally naive perspective towards design from a more behaviorally informed perspective, again converging around this idea of a choice engine.

Eric Johnson: I think that convergence is happening. It’s happening, as always, more slowly than we might like. There’s a concept out there that is actually very close to choice architecture, but most choice architects don’t know it, and it’s called a dark pattern. A dark pattern is, for example, a website where you see in bright red, very vividly, the option they want you to choose. As I look at my Zoom screen, ‘Leave’ is there in red. I’m not going to leave, don’t worry, but it’s very salient. Other things, like mute, are not.

Now, as everyone who’s used the web knows, sometimes that’s the option like “Please give us your private information,” which is in red, and there in gray, with a faded outline, is “No”. Now, that’s a place where most-

Brooke Struck: Or even worse you’ve got this huge flashing, bright red button, “Accept all the cookies forever and give us your first born”, and then the other option in a small, gray box is not even, “No, don’t do these things”, it’s, “Would you like to consult a page that will allow you to read all of the information and make elaborate decisions about what to accept and not to accept?”

Eric Johnson: Right. What I love about that example is it’s about time, right? People’s sense of time is basically, I can get what I want by picking the red button, and the equivalent of the blue pill is, who knows how long it’s going to take me to get through that? So, I think it goes to your point in a very nice way. And designers already know that works, but those of us who study decision-making have not really explored that space. I mean, the decisions of which of those two buttons to click on are not really part of choice architecture as we know it now. I think we’re getting closer to understanding that, but that’s a great place for this convergence you’re talking about. It’s starting to happen, but we’re not there yet.

Brooke Struck: Dark patterns also lead very naturally to this question of responsibility, right? Choice engines are much more powerful than choice architecture, and proportionately, the responsibility to represent and to do right by the dignity and well-being of the people who are going to be in that choice ecosystem goes up with the power that ecosystem exerts. What are the biggest ethical pitfalls around choice engines that perhaps we from the behavioral world of choice architecture didn’t have on our radar before?

Eric Johnson: I struggled with that question for quite a while as I wrote the book, trying to figure out, was there a simple rule that I could advise designers to use? Pretty soon you realize you are in the world of ethics writ large. Choice architecture is no different than lying in this sense: if you lie, it’s wrong, and you can use your choice architecture to get people to choose things that will make them unhappy or miserable, but make you rich. It’s not the existence of a choice engine that is critical, it’s the fact that you’re making ethical decisions.

So, the book is not about ethics, but the book does say that using choice architecture is not optional; deciding not to use choice architecture is not an option. You’re going to do it. You have to decide whether you’re going to do it in a way that’s ethical or not.

Brooke Struck: There also seem to be some ethical opportunities here, which is a term or a concept that I don’t really think about all that much, ethical opportunities. But what we were talking about before in terms of this coaching function that all of a sudden becomes possible with a choice engine, and wasn’t possible with choice architecture: doesn’t that provide us an opportunity to better understand the preferences of users, in order to be able to better track whether we are in fact doing a good job of helping them to put those preferences into action?

Eric Johnson: Certainly. Let’s go back to the point you were making earlier, which is that someone’s inconsistencies can be a sign that they’re not sure what their preferences are. So, something that has turned out to be very important in improving the environment is your choice of electricity provider. You might not think about it, but that’s a major source of greenhouse gases. It turns out that if you default people into the sustainable option, they’re much more likely, like going from 10% to 80%, to choose green electricity.

Now, that suggests that people’s preferences for the kind of electricity they get are not that well assembled. It’s something that’s constructed. So, now we have an ethical choice: do we use that to promote something that I personally think is the greater good, which is reducing greenhouse gases? I’d suggest that’s actually something you should do if you believe that. You also should make it obvious that you’re doing it. It turns out the tricky part of that is, when you warn people, “We’re about to default you,” they think they’re not going to be influenced.

So, it makes the ethical questions you raise even deeper because warning doesn’t get you off the hook, but you’re still going to have an influence and now what you’ve done is told people you’re doing it and they think, “Oh, everybody but not me.”

Brooke Struck: So, to listeners who want to become better choice architects, let’s get very meat and potatoes here. Along both of these dimensions, increasing their effectiveness as choice architects, but also fulfilling their responsibilities to choosers, what can they start working on tomorrow morning to start doing better? And let’s divide this. We’ll start with people coming from the behavioral perspective who are used to the choice architecture world and are now increasingly living in a world of choice engines. What is it that they can start doing to improve their practice?

Eric Johnson: It’s important, A) that they’re obviously aware of what they’re doing, but also, B) that they’re aware they have a toolbox. People think of things in terms of simple solutions, like, “We’re going to frame, we’re going to default,” but there’s a big set of things you can do, and that’s what the book is about. The names you give attributes, the number of attributes, the way you describe attributes, all make a difference. So, basically, open up the toolbox to be much broader than you might think. And when you’re doing a choice engine, a website, a phone tile, these are obviously places where you have many tools at your disposal, so be aware of the broad set of things you can do.

Brooke Struck: Yeah. And in the echoes of what you’ve just said, recognizing that choice architecture is happening even in places where you might not think of it in those terms. For instance, these dynamic digital environments, these are choice architectures that are very, very rapidly evolving. They’re constantly in flux. Sorry, I lost-

Eric Johnson: And as you point out, they’re much more powerful, so both your opportunities and risks are much greater.

Brooke Struck: Yeah. And also that ecosystems that are that dynamic are different kinds of test beds for experimentation than the ones that people coming from academic, behavioral work are typically used to working in.

Eric Johnson: Yeah, we don’t do A/B tests three times a day or three times an hour. When you’re Amazon or Netflix or any web designer, you can do that and see… You don’t have to hypothesize, you see what works.
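
For a sense of the arithmetic behind that kind of high-velocity testing, here is a minimal sketch of a two-proportion z-test on hypothetical click counts, one common way to compare two layouts (not anything specific from the episode):

```python
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Layout A vs. layout B, 10,000 impressions each (made-up numbers).
z = two_proportion_z(620, 10_000, 540, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level
```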

Brooke Struck: Right. Then I’d ask from the other perspective, people coming from the design side who are used to running three A/B tests a day, three A/B tests an hour for instance, what is it that they can start doing tomorrow morning to improve their practice by really onboarding these behavioral ideas that are perhaps a bit of a new introduction for them?

Eric Johnson: I think one thing is they may realize there are short-term consequences, like clicks. But the other thing to realize is there are longer-term consequences, like what people choose and what they do. So, if I’m talking about signing up in a subscription model, that’s fairly simple. Or if I’m thinking about an Amazon or a Netflix, there’s a surface, a shallow criterion, which is: get people to press that button. There’s another, deeper criterion, which is that you’re actually fundamentally changing that person’s experience of the product.

And let’s think more broadly about life. So, if you’re a choice architect and you want to encourage people to choose products that are ecologically sustainable, you have that power as well. That’s the deeper goal I would suggest for folks from the design community.

Brooke Struck: Okay. That’s really helpful. What I’ve seen in the design community, also, is that one of the things that’s sometimes missing is having hypotheses to try to understand why it is that certain tests work. The focus on some conceptual or intellectual coherence to what you’re finding in your tests is less intense than the focus you would typically find among people coming from the behavioral side, who are trying to put together a really full-blooded representation of these different personas going through different choice ecosystems.

Eric Johnson: That’s a great point. I would say that hopefully the two concepts about how choice architecture works in the book, which is the notion of assembling preferences and generating plausible paths, would help them design better interventions. Same thing for, I think, the people from the behavioral science side. When you have more tools, which ones will be most effective in any particular task might be informed by those two ideas.

Brooke Struck: All right. Well, Eric, this has been great. I think that there’s a lot of really meaty material in here for our listeners and I hope that they will enjoy it as much as I’ve enjoyed this conversation. Thank you very much for taking the time to speak with us today.

Eric Johnson: Brooke, this was a lot of fun and actually very thought provoking. I thank you very much for your time.

Brooke Struck: Take care, and we hope to talk to you soon.

We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.

About the Guest

Eric Johnson

Eric Johnson is a faculty member of Columbia Business School at Columbia University, the director of the Center for Decision Sciences at Columbia University, and author of The Elements of Choice. His research examines how we make decisions, how our decisions are influenced by how our choices are presented, and how this intersects with economics and public policy. Johnson’s research has been featured in Nature Neuroscience and The Wall Street Journal, among many others. He previously co-authored two books, Decision Research: A Field Guide and The Adaptive Decision Maker, and is set to release The Elements of Choice in late October of this year.

About the Interviewer

Dr. Brooke Struck

Dr. Brooke Struck is the Research Director at The Decision Lab. He is an internationally recognized voice in applied behavioural science, representing TDL’s work in outlets such as Forbes, Vox, Huffington Post and Bloomberg, as well as Canadian venues such as the Globe & Mail, CBC and Global Media. Dr. Struck hosts TDL’s podcast “The Decision Corner” and speaks regularly to practicing professionals in industries from finance to health & wellbeing to tech & AI.

Listen to next

The Tools Of The Behavioral Science Trade: Matt Wallaert

In this podcast episode, we sat down with Matt Wallaert of Clover Health to discuss the field of behavioral science, when to hire a consultant versus an internal team, and helping people find unique and meaningful career paths.
