Giving People The Tools To Nudge Themselves


At TDL, our role is to translate science. This article is part of a series on cutting edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. At TDL, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.


The Decision Lab is a socially conscious applied research firm that aims to democratize behavioral science. We aspire to share this essential knowledge with a wide audience, with the hope of reaching the ears of critical decision-makers. With this goal in mind, we reached out to Samuli to connect his important work with a broader audience. Too often, research does not naturally reach the people that need its insight the most. This piece is part of a series that aims to bridge that gap.

Since Cass Sunstein and Richard Thaler introduced the idea of “Nudging” citizens towards decisions in their own best interests, the concept has been a contentious one. Is it ethical to decide how people will think? Is anyone in a position to make these choices for someone else? Samuli Reijula recognizes these concerns. Samuli and his colleague, Ralph Hertwig, Director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development, have proposed novel ideas to overcome some of these challenges. Putting choices back in the hands of individual decision-makers is indeed an admirable goal.

A full version of some of Samuli’s work is available here:


Nathan: How would you describe the focus of your research?

Samuli: We study how findings from behavioral science research can be used to help people to deal with self-control problems. The core idea is to turn so-called nudging interventions into tools that people themselves can use to help them reach their goals: instead of fighting temptations with sheer willpower, self-nudges rely on strategic manipulations of decision situations (i.e. choice architecture) so as to avoid the temptation altogether.

Nathan: How would you explain your research question to the general public? 

Samuli: We think that psychologists and other behavioral scientists have a lot to offer in the policy field, previously often left to economists. The nudge program has been central in transferring behavioral policy findings into concrete policy contexts. That said, nudging has also met with resistance, and concerns have been voiced both about the efficacy and ethics of nudging. In our research, we ask whether self-nudging can help with such ethical concerns, as well as extend the scope and persistence of behavioral-science informed interventions. 

Nathan: How did you go about tackling these problems?

Samuli: Ralph Hertwig, together with his colleagues, has developed a policy program called boosting. Boosting interventions aim to give people the knowledge needed to build competencies that help them make better choices in various domains of their lives (e.g., risk literacy, financial planning, healthy food choices). We realized that many nudges can also be turned into boosting interventions: by informing people about self-control challenges and nudging-based solutions to such challenges, we could help people become ‘citizen choice architects.’ Whereas original nudges are top-down interventions where a public policy-maker implements changes in people’s decision environments, self-nudges can strengthen agency and self-control by making people themselves aware of the links between properties of their environments (e.g., positioning of food items in a cafeteria or kitchen) and behavior (sticking to your diet or eating that chocolate bar) as well as providing them with efficient ways of changing those environments for the better.


Nathan: How do you think this is relevant to an applied setting?

Samuli: Our main aim in the article was to introduce a new approach to behaviorally informed policy — self-nudging. Although the evidence collected in various areas (e.g., nudging, boosting, behavior change, behavioral therapy) gives us reasons for optimism about the potential of self-nudges, it is, of course, early days for self-nudging, and in order to count as evidence-based interventions, we need more data about the efficacy of self-nudges. 

We don’t suggest that self-nudges should replace more traditional policy tools like nudges, financial incentives or regulation. Instead, we consider self-nudges as an addition to the policy maker’s toolkit. 

Nathan: What do you think some exciting directions are for research stemming from your study? 

Samuli: In addition to providing new policy tools, we think self-nudging can also be a way of improving our ‘psychological literacy’, i.e. our understanding of how our minds function. In particular, I think many of us have an overly intellectualized view of self-control. We often feel bad about our weak willpower and inability to resist temptation. We are embodied beings, often undecided and torn between different desires and urges. That we should accept. But research does not suggest that people’s willpower has recently declined; if anything, the contrary. Instead, we spend more and more time in highly designed environments where we are constantly nudged by, for example, advertisements and smartphone apps that compete for our attention.

Samuli: We see self-nudging as an important means for helping people understand how the often apparently insignificant aspects of our environments steer our behavior, and how learning to design our own decision environments helps us to take some of that power back.

How We Can Nudge Ourselves To Save More

Lately, I’ve been reading more about the psychology of poker, and how it can help with navigating uncertainty and making good choices under pressure. The game is an excellent metaphor for several facets of life in which we handle the unanticipated. One that comes to mind is our financial savings. In a time when the uncertainty of simply existing is more apparent than ever before, it frightens me to know that the majority of North Americans are not financially prepared for the unexpected.

Nearly half of all Canadians between the ages of 18 and 44 don’t have money set aside to cover emergencies.1 Many are living paycheque to paycheque: 39% of Americans would be unable to pay an emergency expense of more than $400 and would be forced to withdraw from their retirement savings, incurring a financial penalty.2 While behavioral science has focused a great deal on encouraging long-term saving habits, I wonder to what extent we can encourage saving behaviors in the short term.

The benefit of short-term savings

Short-term savings are increasingly recommended by financial advisers and policymakers.3 An emergency fund is a savings account dedicated to unexpected expenses—ideally 3 to 6 months (sometimes up to a year) of living expenses.4 Emergency funds boost our feeling of financial safety, which helps us make better financial decisions. Specifically, experts from Harvard, Yale, and Brigham Young University say this type of savings helps us overcome mental accounting — valuing economic outcomes in a way that makes us susceptible to irrational decision making.2

Present bias – Why we value immediate returns

But, it isn’t entirely surprising that the majority of us don’t have emergency savings. Saving money is “at the nexus of just about every behavioral bias we have”, according to Kate Glazebrook of the Behavioural Insights Team. Present bias and loss aversion are two predominant biases that distort our view of our finances and get in the way of optimal decisions. Yet there are strategies we can implement to help us overcome these biases.8

Present bias describes our tendency to place more value on what is near in time, whether work, people, or the value of money,9 even when the time frame is only 15 minutes. The famous Stanford marshmallow experiment reminds us that when given the option of one marshmallow now, or two upon waiting 15 minutes, most children won’t wait.10

Closely related is the concept of hyperbolic discounting, which explains why we prioritize immediate rewards over later rewards, even if the later rewards are greater. It’s a wide-ranging problem: studies find that 74% of people will choose short-term gains over long-term ones.11 The bias has a direct effect on our financial decisions, as people with present bias were found to choose a lower savings rate.12 It can have a detrimental impact on our future financial success, as it gets in the way of the benefits of compound interest in our savings.
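To make the contrast concrete, here is a minimal Python sketch of the two discounting curves; the parameter values are illustrative choices of mine, not figures from the cited studies:

```python
def exponential_discount(amount, delay_days, daily_rate=0.001):
    """Classical exponential discounting: value decays at a constant
    rate, so relative preferences stay consistent over time."""
    return amount * (1 - daily_rate) ** delay_days

def hyperbolic_discount(amount, delay_days, k=0.01):
    """Hyperbolic discounting: value drops steeply for short delays and
    flattens out for long ones, which is what produces present bias."""
    return amount / (1 + k * delay_days)
```

For a $100 reward delayed one month, the hyperbolic curve already values it at roughly $77, well below the exponential value of about $97, which is the “I want it now” pattern the marshmallow study illustrates.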


When making financial decisions, it is possible to overcome present bias. One useful tool is the act of imagining your future self. In several different studies, participants who were asked to consider their future selves engaged in improved behaviors such as enrolling in an automatic savings account, choosing long-term rewards over short-term ones, and scoring better on a financial literacy exam.13 Using these tactics to overcome present bias is very beneficial — a study found that those with less present bias picked savings accounts with higher returns in the long term.14

Make a promise to yourself. It will boost savings

Actively committing to savings also helps combat present bias. A study found that participants who reflected on their saving goals gave a higher initial contribution and saved more throughout the experiment. The explanation for this result is that people want to be consistent with their words, intentions, and actions — this effect is stronger when we make these promises explicit by writing them down or declaring them publicly.11

In contrast, we prefer to avoid situations in which we have to lie. A different experiment found that individuals were less likely to commit to a savings account that allowed early, penalty-free withdrawals as long as they declared a “real financial emergency”. Participants feared they might have to lie to make a withdrawal, so they opted out of contributing in the first place.14 By considering our future and making an active, realistic commitment to our goals, we can overcome our tendency to save less.

These small behavioral tactics have produced great results. The “Save More Tomorrow” plan in the US, developed by Richard Thaler and Shlomo Benartzi, directly targets present bias by having people commit to saving in the future.15,16 In just over 3 years, participants in the program increased their savings rate from 3.5 percent to 13.6 percent.15 Although present bias is a problem for most of us, simple tactics like considering our future selves and actively committing to our savings can have a tremendous impact on our financial behavior.
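The commit-now, escalate-later logic behind Save More Tomorrow can be sketched roughly as follows; the function and its step sizes are my own illustrative choices, loosely mirroring the 3.5 to 13.6 percent trajectory rather than the actual program rules:

```python
def save_more_tomorrow(start_rate, step_per_raise, cap, raises):
    """Pre-commit to raising the savings rate (in percent) at each
    future pay raise, up to a cap, so the increase never feels like
    a cut in take-home pay."""
    rate = start_rate
    schedule = [rate]
    for _ in range(raises):
        rate = min(round(rate + step_per_raise, 1), cap)
        schedule.append(rate)
    return schedule
```

Because each increase coincides with a raise, present bias works for the saver instead of against them: the commitment is made today, but the sacrifice only arrives later.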

Loss aversion – How saving can feel like more of a loss than a gain

Loss aversion is another bias that inhibits our ability to save: we feel a loss more strongly than a gain of the same size. Putting money away every month may therefore feel more like a loss now than a gain down the line. Since we are psychologically wired to avoid the feeling of loss, we may avoid saving.8

Automation is a fantastic tool to overcome loss aversion. Its benefits are best seen through automatic “opt-in”, a widely known and incredibly successful nudge that policymakers use to influence behavior.

Nations like Australia and the UK have implemented automatic enrolment programs for employee pension savings plans. Essentially, all employers are required to automatically enrol their employees in a pension program, while allowing employees to opt out if they choose. The UK launched its auto-enrolment pension program in 2012, and by 2015 participation had risen from 55% to 88%, with most of that growth coming within three years of the rollout.17,18 Automatic enrolment has the same benefit at the employer level as well. A 2015 Vanguard study of US employers with auto-enrollment policies found that participation rates rose from 47% to 93% with auto-enrolment, and 8 in 10 participants increased their contributions.19

Policy nudges for automatic enrolment have been effective nationwide for pension savings. Experts are encouraging similar automatic enrollment for emergency savings, but such programs may take time to arrive. So, can we nudge our own savings in the short term? The truth is we can, and automation is a useful tactic to help.

Technology and automation can help us nudge ourselves toward better personal savings

“Pay yourself first” is famous advice from several financial experts, including the investor Warren Buffett.20 The idea is to take a portion of your paycheque and save it before you spend the rest. Loss aversion makes this hard to implement. However, just as “opt-in” works for pension savings, automating your personal savings can encourage saving behavior without your having to think about it.

Technology is our best friend when it comes to setting up behavioral nudges for savings. Bank accounts allow automatic transfers for bill payments and scheduled transfers into savings accounts that organize your finances on payday.21 But this hack is only useful for those of us with regular paycheques, not the 40% of Americans who aren’t salaried employees.16 Some banks now offer tools to set up automatic transfers per deposit. Additionally, “round-up savings” and automatic transfers of small amounts are becoming very popular, especially among younger generations. Certain apps even let you attach a picture of your goal to your savings account.22
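As an illustration of how a round-up feature works under the hood (this is a hypothetical sketch, not any particular bank’s implementation):

```python
import math

def round_up_transfer(purchase, round_to=1.00):
    """Round a purchase up to the nearest dollar and return the spare
    change that a round-up app would sweep into savings."""
    rounded = math.ceil(purchase / round_to) * round_to
    return round(rounded - purchase, 2)

# Spare change accumulated from a small day of spending
purchases = [3.40, 12.75, 6.00, 9.99]
saved = round(sum(round_up_transfer(p) for p in purchases), 2)
```

Each individual transfer is small enough that loss aversion barely registers, yet the transfers accumulate without any active decision to save.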

This technology reframes savings in ways that trick us into saving more. Little research exists on the effectiveness of these tools, but what does exist is encouraging. A recent study asked participants: would you give up $5 a day, $35 a week, or $150 a month? Framing the deposit as a daily amount quadrupled the number of people who opted to save.23 Chime, an American neobank, launched an automatic savings program that saves money based on spending. The company claimed that users saved almost double what they usually save, and that individuals enrolled in a program that saved based on both spending and earnings saved almost three times as much.24 These tools can have a tremendous impact on our savings behavior and can help us overcome the cognitive biases that interfere with our ability to save.
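A quick back-of-the-envelope check shows why the daily framing is so effective: the three amounts describe nearly the same yearly total, so only the felt size of the sacrifice changes:

```python
daily, weekly, monthly = 5, 35, 150  # the three framings from the study

# All three framings add up to roughly the same amount per year
per_year = {
    "daily": daily * 365,    # $1,825
    "weekly": weekly * 52,   # $1,820
    "monthly": monthly * 12, # $1,800
}
```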

To summarize, cognitive biases like present bias, hyperbolic discounting, and loss aversion may hold us back from engaging in optimal saving behavior. The following strategies can help us overcome these biases:

  • Imagine your future self – picturing yourself in the future will help you get a hold of present bias and see the long-term benefit of saving now.
  • Make saving goals explicit – promising yourself to save, ideally in writing or out loud, will further increase the chance you follow through.
  • Automate everything and anything – just like automatic opt-in programs skyrocketed pension savings, automating your savings will encourage you to save more without even thinking about it.

Similar to poker, there are many factors in life that we can’t control. Yet we can implement measures to prepare for the unexpected. Short-term savings are one important way to do so, yet our behavioral biases get in the way. Policy “nudges” like automatic opt-in have done great work in boosting retirement savings nationwide. While we can’t rely on governments to boost our short-term savings, we can implement cost-efficient, simple tactics to help us save more for the future and make better financial decisions overall.

TDL Perspectives: Encouraging Social Justice With Behavioral Science

Sekoul Krastev, a managing director at The Decision Lab, sits down with Dr. Brooke Struck, our research director, to discuss his vision for advancing social justice through behavioral science. Some topics discussed include: 

  • Trying to define social justice and its importance
  • How culture impacts efforts towards social justice
  • The behavioral science pitfalls that hinder social justice
  • The assumptions of most people who are uninitiated in behavioral science
  • The struggle of de-biasing training and the effectiveness of educating about non-rational behavior
  • Recognizing the neoliberal tendencies of behavioral solutions, and crafting ideas about how to circumvent them
  • Research-based opportunities to improve efforts towards equity
  • A sketch of behavioral science policy for social justice
  • The benefits of behavioral science in the face of how challenging it is to address systemic issues

Sekoul: Thanks for sitting down with me Brooke. I want to begin our discussion by asking you to give a tentative definition of social justice. 

Brooke: I would define social justice as the fair distribution of opportunities—and probably outcomes, perhaps to a lower degree—across society, at the individual and group levels. These resources should be fair along various dimensions, including health, wealth, education, power, and various dimensions of rights. At a minimum, this means a fair distribution of opportunities, such that one individual has the same opportunities as any other, all other things being equal.

Sekoul: Why do you think that fair distribution is important?

Brooke: Other than the moral imperative that things ought to be fair, which is probably the most important reason, there are other dimensions as well. One that comes to mind is the stability of society. If there is a shared perception of unfairness, the status quo is threatened. This is not to say that resources and opportunities must be equally divided. The way that resources and opportunities are divided, however, must reflect how most people believe it ought to be. That will determine how stable the society is. A state of general fairness will engender support for the status quo, whereas a broad sense of inequality will breed unrest and make the status quo unstable. This, in my opinion, can lead to revolution.

Sekoul: Is there a unified wisdom about how to achieve social justice?

Brooke: I’m not sure there is a unified wisdom about that; in fact, it’s very culturally specific. The value placed on equality of outcome or opportunity varies significantly between cultures. In a Western context, we like to think that we have certain inalienable rights. Other groups still don’t agree with that idea of broad-based equality. For instance, Thomas Piketty, in Capital and Ideology, discusses the emergence of property rights. Property rights are something that typically in the West we consider to be universal. Everyone has an equal right to own things. Previously, that was not the case. And even the concept of property around the world differs from one culture to the next.

This idea of universal human rights, for instance, assumes an equality of all humans with all others, which is, itself, a cultural artifact. It’s an outcome of our history that we believe, to borrow the famous words, that “all men are created equal,” which is not necessarily a given. Human biology is not hardwired to affirm universal equality, rather, it is only one of many possible cultural outcomes.

Sekoul: What behavioral science pitfalls do you think might exist in our society in the West, or at least in North America, let’s say, that would prevent social justice?

Brooke: First, people expect that they have a great deal more control over their own preferences and experiences than they actually do. And one of the ways in which behavioral science gets pulled into that is something like unconscious bias training. Unconscious bias training relies on the tacit assumption that once someone has explicitly learned about a bias, they will be able to overcome it because they are aware of its effect. There’s good evidence to suggest that we should not have that much faith in the power of our conscious mind to affect the subconscious infrastructure on which consciousness is built.


Nonetheless, there is value in being able to acknowledge biases after the fact. It may not prevent falling prey to the bias in the moment; however, acknowledging its effect after the fact may still raise awareness and, as a result, help with advocating for action to negate its effect in the future. Perhaps that means increasing support for nudge policy, etc.

Sekoul: Right, that’s an interesting insight. What do you think some opportunities afforded by behavioral science might be in regards to pursuing social justice?

Brooke: Behavioral science carries this myth that once you know about a bias, you’ll be able to overcome it. Research actually points us in a different direction: in cases where we do recognize that our subconscious biases contribute to perpetuating injustice, even if raising them to conscious awareness is not enough, there are other things that we can do. We should not rely on our intuitions and on our tacit judgments in cases where those can be demonstrably shown to be problematic.

However, there are many solutions other than relying on our intuitive judgments. For one, we can cut off sources of information that tend to feed our unconscious biases. For instance, removing names and photographs from CVs in a hiring process can help to overcome some biases related to race and gender. However, that also tends to get at only the more superficially visible sources of unconscious bias.

For instance, there is good research demonstrating that inequality runs much deeper than just your name and your face. Inequality probably extends to influencing, if not determining, the kinds of schools that you went to, the kinds of networks that you have, the kinds of professional experiences where you have had to demonstrate your capabilities. All of those things go well beyond the name or the photo that you might put on your CV. They’ll be threaded throughout all of the lines on your CV, woven deep into its tapestry. It’s not as easy as simply removing the label and assuming that the product will be judged fairly without it.

Sekoul: Thinking about instances where behavioral science might’ve been used to increase social justice or make it more likely, what do you think some of the failures of behavioral science might be? Some of the pitfalls?

Brooke: One of the great sales propositions for behavioral insights is that they’re supposed to be quick and dirty, and cheap. Those kinds of arguments are very attractive because everyone’s looking for inexpensive solutions. But the flip side of those arguments is that behavioral science offers solutions without challenging more deep-seated facets of the system. It’s been criticized for being neoliberal in this sense that, to go back to the analogy I was using before, all we ever change is the labeling. We never actually change the product.

That’s one of the ways that behavioral science has managed to sell itself, that just by changing the label, you can get dramatically better performance from your product. But that has allowed the product itself to remain unchanged. That has allowed the problematic features of the system to go unchallenged, all while people get to pat themselves on the back for thinking that they are actually moving forward on key files.

Unconscious bias training is a perfect example of that. For want of a better way to put it, people were sold on the idea that a training seminar would stop the proliferation of biases that perpetuate inequalities. Because of these kinds of workshops, people felt that they were doing something about these inequalities, when in fact many of their actions were nearly inconsequential.

Sekoul: If you’re a policymaker looking to the behavioral sciences in order to design new policies within the limited budget, how would you start to use techniques from that field to increase social justice?

Brooke: I think the way that a lot of those conversations have started up until now is to say here’s a cheap, quick, and effective way that we can make gains on inequality. But we need to be more aggressive than that, I think. 

The Decision Lab openly acknowledges that social justice has to do with the distribution of benefits within society, and it is never going to be a painless exercise to recalibrate those distributions. It’s going to be hard and we have to stop trying to sell it as though it won’t be, because substituting hard choices with easy ones is rarely rewarding in the long run. 

Sekoul: How, then, is behavioral science relevant to effectively pursuing social justice?

Brooke: We cannot sell social inequality as something that is easy to fix through behavioral science. What we can do is pitch the effectiveness of behavioral science solutions. If we are going to be making hard choices about where to allocate resources and about how to recalibrate social distributions, we need to put ourselves in the best position for those hard choices to succeed. 

The decisions, regardless of who is choosing, are never easy; however, you want them to work. That is where behavioral science has its real value proposition. You’re going to be in a situation where you have very, very difficult choices to make. We can help increase the chances of success, even without offering to make it less painful.

Nudging Can Encourage Sustainable Food Choices


As a socially-conscious applied research firm, TDL is interested in connecting cutting-edge research with real-world applications. In particular, we’re interested in behavioral science interventions that create healthier societies. One way of achieving this goal is to modify choice architecture to nudge consumers in healthier, more sustainable directions.

To hear directly from someone working on these exact kinds of issues, we reached out to Dr. Jolien Vandenbroele, a postdoctoral researcher in the Department of Marketing, Innovation and Organisation at the University of Ghent. In this study, Dr. Vandenbroele and a team of researchers sought to uncover how modifications to choice architecture could impact consumers’ willingness to swap meat for non-meat substitutes.

A link to the full study is available here: Mock meat in the butchery: nudging consumers toward meat substitutes

Full versions of some of Jolien’s other studies are available here:

Nudging to get our food choices on a sustainable track

Food-tastic choice!: nudging to get our food choices on a healthy and sustainable track

If you work it, flaunt it: conspicuous displays of exercise efforts increase mate value


Julian: How would you describe your research in a nutshell?

Jolien: How is it possible that we always end up with products in our basket that we did not plan to buy? Supermarkets use smart techniques to steer our shopping behavior. But what if these techniques were used not only to trick us into buying more candy, but also to steer us toward more sustainable products? My research focuses on giving consumers a little ‘push’ towards more sustainable products in the supermarket by adapting the store layout. These interventions are called ‘nudges’ and are characterized by being cheap, easy to implement, and never restricting freedom of choice. Think of repositioning products on the shelf so that the more sustainable ones are easier to reach and more visible than less sustainable products. 

Meat substitutes are products that look and taste like meat but are completely plant-based, such as veggie burgers. They are mostly sold in a separate, vegetarian section of the supermarket. But is this actually the best position to maximize sales? We found that most people who buy meat substitutes are actually not vegetarians, but flexitarians! Flexitarians are people who do eat meat, but are willing to skip it once in a while and replace it with veggies. 

Julian: What did you do with this information?

Jolien: We hypothesized that there would be more meat substitutes sold when they were positioned next to the meat product that they are imitating, rather than when they are placed in a vegetarian-only section. This would firstly increase the visibility among flexitarians, frequent buyers of the product, as they mostly skip the vegetarian section. When the meat substitute is placed in the butchery, a section they visit more frequently, they will more easily notice the meat substitute. Second, we believe that placing them next to each other, in a pairwise presentation, will help shoppers to actually consider the veggie burger as an alternative for the meat product. By seeing them next to each other, the meat substitute will be taken more easily into the set of options customers are considering for their dinner: will we eat chicken, a burger, or… a veggie burger? So by increasing the visibility (nudge 1) and placing the products pairwise (nudge 2), we expected a boost in the sales of meat substitutes. 

Julian: What rough process did you follow?

Jolien: In the first study, we set up a field experiment in collaboration with a big European supermarket chain. In one store, we adapted the store layout for one month so that the meat substitutes were placed next to the products they were imitating in the butchery, for example, the vegetarian curry next to the chicken curry. During this month, we tracked sales of the meat substitutes and compared them to the sales of the month before. As an additional control, we also compared these numbers to the sales of meat substitutes in eight similar stores where no interventions took place. In a second lab study, we created a mini-store where we manipulated visibility and pairwise presentation independently by changing the set-up, to examine the individual effect of each nudge on product choice. For example, for some participants in the mini-store, the meat substitutes were highly visible, while for others they were less visible. 

Julian: What did you end up finding out?

Jolien: We found in our field experiment that more meat substitutes were sold (almost three times more) when they were placed next to the meat product in the butchery. Sales of meat substitutes were enhanced relative to both past sales in the experimental store and sales in the eight control stores that served as benchmarks. Interestingly, no backfire effect was observed, as meat product sales did not increase significantly. In the follow-up lab experiment, we found that both individual nudges, visibility and pairwise presentation, had a positive effect on the choice of meat substitutes. 


Julian: How do you think this is relevant to an applied setting?

Jolien: The production of meat, and especially red meat, is a process that heavily affects our environment through high CO2 emissions. To preserve our natural resources for the future, we will have to adapt our food habits and opt for more sustainable food choices. However, old habits die hard, and this is no different for food. If retailers offered meat substitutes next to the meat products they imitate, more shoppers would find their way to these vegetarian products. Such a mixed assortment would nudge shoppers to purchase more sustainable food products. 

Julian: What do you think some exciting directions are for research stemming from your study? 

Jolien: Meat substitutes are known as a ‘first step’ towards a diet with more vegetarian products because the barrier to trying them is quite low. This is because meat substitutes look familiar, which makes people more willing to try them compared to completely novel products. It would be interesting to explore how we can take the next step and let people explore vegetarian products that do not have meat look-alikes, such as tofu and halloumi. This is quite a challenge, as these products are still mostly unknown to the general public, so the willingness to try them is lower.

How Effective Is Nudging?


At TDL, our role is to translate science. This article is part of a series on cutting edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. At TDL, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.


The concept of nudging has recently grown in popularity. This is partially due to how exciting and innovative these types of interventions can be. But what might be more important than their novelty is whether they actually work. And if they do, which conditions are important for implementing nudges, and what can we learn from studying them on a large scale?

As an applied behavioral science research firm, The Decision Lab is interested in learning more about the effectiveness of nudges and how they can be better implemented to drive social change. To further this interest, we reached out to Dr. Dennis Hummel and Prof. Alexander Maedche to learn about their work on studying the effectiveness of nudges and their attempt at classifying them with the purpose of guiding future research.

Full versions of some of Dennis and Alexander’s studies are available here:

Who can be nudged? Examining nudging effectiveness in the context of need for cognition and need for uniqueness

How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies

Designing adaptive nudges for multi-channel choices of digital services: A laboratory experiment design

Improving Digital Nudging Using Attentive User Interfaces: Theory Development and Experiment Design

Accumulation and Evolution of Design Knowledge in Design Science Research – A Journey Through Time and Space

How would you describe the focus of your research in simple terms?

Nudges are “any aspects of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives”.1 As nudges are very popular, they affect our everyday lives, for example through governmental nudge units such as the Behavioral Insights Team in the UK or the former Social and Behavioral Sciences Team in the US. Yet it has never really been investigated on a large scale whether nudges really work and, if so, under which conditions. One of the authors of the original nudging book has even dedicated a separate journal paper to “nudges that fail”.2 With our research,3 we aimed to estimate the effectiveness of nudging. Moreover, we wanted to design a classification system that can serve as a guide for future nudging studies.

How would you explain your research question to the general public? 

Well, we follow two main research questions. On the one hand, we want to judge whether the hype around nudging can be backed by scientific data. We ask ourselves by how much nudges decrease or increase an outcome compared with a control group that received no nudge.

We also want to know which factors influence differences in effectiveness. As nudging is a broad concept, some types of nudges or contexts could be more effective than others. On the other hand, we asked ourselves whether all nudging studies can be classified into one comprehensive system, such as a taxonomy or a morphological box.

What did you think you’d find, and why?

As we followed an explorative approach, we did not formulate any explicit hypotheses. However, we of course expected that nudging would be highly effective. Based on previous literature reviews on nudging, we thought that defaults in particular would be among the most effective types of nudges. Moreover, we expected that effectiveness might differ by context, for example energy or the environment, and between offline nudging and digital nudging, a rather new concept introduced by Weinmann et al.4 Finally, we also hoped to find rather practical information, such as the countries in which the fewest nudging studies have been conducted or the types of nudges that have been used rarely, to offer avenues of future research to other researchers. As for the classification system, we were entirely open and curious, as taxonomies and morphological boxes are developed along the way.

What sort of process did you follow?

We first conducted a systematic literature review. Literature reviews proceed roughly as follows: after defining a goal, a search strategy, keywords, and databases, we ran a keyword combination in several academic databases. We had a broad set of keywords and found about 2,500 papers, which then had to be screened based on title, keywords, and abstract. After the screening, we read 280 papers in full to distill the 100 relevant papers for our analysis (it was a pure coincidence that we ended up at such a round number). These papers were then analyzed in detail, extracting the type of nudge, the effect size, the context, and other relevant information. In the end, we created a database, available on request, with more than 300 different nudging treatments and more than 20 characteristics extracted for each treatment. To design the morphological box, we followed the recommendations of Nickerson et al.5

What did you end up finding out?

We found much more than we could ever present in one academic paper. First, our analysis revealed that only 62% of the nudging treatments are statistically significant, which is much lower than we initially expected. Nudges have a median effect size of 21%, which varies by the type of nudge and the context. As expected, defaults are the most effective nudges, while precommitment strategies (i.e., you commit now to do something in the future) are the least effective. Moreover, digital nudging is about as effective as offline nudging, but it offers new possibilities to individualize nudges. This means that digital nudges can be adapted more easily to the individual characteristics of the decision-makers (see a brand-new study for more information: Ingendahl et al., 2020). Finally, we developed a morphological box that categorizes empirical nudging studies along eight dimensions.



How do you think this is relevant to an applied setting?  

When the paper was published, many people from business and civil society contacted us to learn more about applications of the results. Managers often asked for the most effective nudge to increase sales, which is of course not the purpose of nudging (and can rather be classified as a form of manipulation). But for public goods, our results show, for instance, that changing the required effort, reminders, and feedback are effective nudges in a health context, which might offer ways to fight the Covid-19 virus (e.g., by reminding people to wash their hands or giving them feedback on whether they have washed their hands long enough).

What do you think some exciting directions are for research stemming from your study? 

Our study offers a variety of avenues for future research. First, we noticed that only a few studies used digital nudges. As more and more decisions are taken online, this is definitely an evolving area for future research. This is particularly true if you consider that programming digital environments is much easier than physically rearranging cafeteria lines or changing the default of organ donations. Also, more studies could be conducted in Africa, Asia, or Latin America, the latter being entirely ignored by the studies we found. Finally, certain types of nudges are under-researched, such as pre-commitment strategies or feedback nudges (the latter being very surprising to us as feedback mechanisms of all types are very common today).

Game Theory Can Explain Why You Should Wear A Mask

COVID-19 is no longer a battle against a virus. It is also a battle within society against the uncooperative. Tensions are rising as individuals take polarized stances against safety guidelines. While some are pressing others to socially distance and wear a mask, others are protesting that these are violations of their individual freedoms. 

Wearing a mask is one of the recommended methods for preventing the spread of the coronavirus, and there is strong scientific evidence to support this.1 Moreover, in contrast to social distancing and quarantining, wearing a mask is more frequently adopted, easier to follow, and is less restrictive. If more people had decided to always wear a mask, America would have done a far better job controlling the pandemic.

So why are people still electing to not wear masks? In a previous article, I explained the phenomenon of caution fatigue, which describes an individual’s tendency to stop complying with safety guidelines due to depleted motivation and energy.2 This article broke down the behavioral science underlying caution fatigue and provided actionable steps towards mitigating it. These insights can be applied to the decision to wear a mask.

For example, let’s take a look at psychological reactance as a contributor to caution fatigue: After constantly being instructed to wear a mask over time, individuals may purposefully not wear a mask to establish a sense of personal freedom, even if this means dismissing scientific evidence.

Besides caution fatigue and reactance, there are a variety of other reasons why many consciously elect not to wear a mask. Some complain that masks are uncomfortable (difficult to breathe in, hot, sweaty, they fog up glasses, etc.). Some have politicized wearing a mask as a “liberal” action. Others hold incorrect beliefs about the effectiveness of masks.

For example, some believe that masks only protect the mask-wearer, and therefore argue that individuals have the liberty to risk their own safety. Others believe that masks only protect others, and argue that they should not have to experience the discomforts of wearing a mask to protect others’ health. Then, there are some who incorrectly believe that masks are not effective at all.3

In an attempt to convince members of society to wear a mask, many sources utilize science. However, this approach has continued to fail due to the misinformation pandemic, the freedom argument, and a general distrust in authority. Breaking down the decision to wear a mask from a game theory lens provides a rather novel perspective. By modeling assumptions that reflect people’s reasons to not wear a mask, we see that wearing a mask still results in the optimal outcome for society. 

What is the prisoner’s dilemma?

The prisoner’s dilemma is one of the most famous examples of game theory, the study of how parties interact and choose strategies in competitive environments.

In this game, two apprehended bank robbers (Brendan and Jackson) have been placed in separate rooms so that they cannot communicate with each other. Each has two options: to confess or to remain silent. If Brendan and Jackson remain silent, they each get a 2-year sentence. However, if Brendan confesses while Jackson remains silent, then Brendan will only get a 1-year sentence while Jackson receives an 8-year sentence. And vice versa. If both confess, both receive a 5-year sentence. The model below visually represents this game. 

Clearly, the optimal outcome is for both prisoners to remain silent. However, there’s a paradox: both Brendan and Jackson are inclined to confess. Each may be motivated by the prospect of the 1-year sentence, or each may fear getting 8 years if his partner confesses. As a result, both confess and receive 5 years, which is not the optimal outcome.

The prisoner’s dilemma demonstrates how choosing the best outcome for an individual may create a less than optimal outcome overall. And this is often true in the real world, where a lack of cooperation can lead to an inefficient outcome overall. 
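The sentences described above can be written down as a small payoff table. Here is a minimal Python sketch of the dominance logic; the encoding and helper names are my own, not from the article:

```python
# Prisoner's dilemma: payoffs are years in prison (lower is better),
# keyed by (Brendan's choice, Jackson's choice).
SILENT, CONFESS = 0, 1

years = {
    (SILENT, SILENT): (2, 2),
    (SILENT, CONFESS): (8, 1),
    (CONFESS, SILENT): (1, 8),
    (CONFESS, CONFESS): (5, 5),
}

def best_response(others_choice):
    """Brendan's shortest-sentence reply to a given choice by Jackson."""
    return min((SILENT, CONFESS),
               key=lambda mine: years[(mine, others_choice)][0])

# Confessing is a dominant strategy: it is the best reply either way.
assert best_response(SILENT) == CONFESS
assert best_response(CONFESS) == CONFESS

# Yet mutual confession is worse for both than mutual silence.
print(years[(CONFESS, CONFESS)], "vs", years[(SILENT, SILENT)])  # (5, 5) vs (2, 2)
```

By symmetry, the same best-response check applies to Jackson, so both players confess and land on the (5, 5) outcome.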

Wearing a mask and the prisoner’s dilemma

Game theory can help model the decision to wear a mask. Interestingly, we can even model these decisions under different assumptions that reflect the reasons (albeit incorrect) for not wearing a mask. 

Let’s define the players of this game as two individuals who have to make the decision of whether they want to wear a mask or not when going out. Like any decision, this decision depends on how the individual evaluates his/her costs (discomfort, ‘loss of freedom’) and benefits (reduced risk of infection, social approval).5

Model 1: The assumption is that masks only protect others around you. 

Many believe that masks only protect others and that they offer no self-protection. If this were true, an individual would only receive the health benefits of masks if others around them wore one. Of the common misconceptions, this one is closest to the current scientific understanding of how masks work (to be clear, however, science has shown that masks also protect the wearer to a certain extent).

This becomes a paradox that resembles the prisoner’s dilemma. An individual with this belief may act in self-interest and not wear a mask to avoid the discomfort and still receive the health benefits. However, there are likely many others with this belief that may make the same self-interested decision. As a result of this non-cooperation, both players choose to not wear a mask and end up suffering. 

The model above visually represents this. The health benefit of the other individual wearing a mask was arbitrarily assigned a value of 10. The cost of wearing the mask was arbitrarily assigned a value of -2. While the most efficient outcome is to have both individuals wear a mask (8,8), the end outcome is actually (0,0). 

This model is very similar to, and works equally well for, the belief that masks only work when both parties wear one.
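Using the article’s arbitrary values (a benefit of 10 if the other person wears a mask, a cost of -2 for wearing one yourself), the dominance argument can be sketched in a few lines of Python; the function and constant names are illustrative only:

```python
# Model 1 assumption: masks only protect OTHERS. My payoff depends on
# the other person's choice (benefit 10) and my own cost of masking (-2).
WEAR, SKIP = 1, 0

def payoff(me, other):
    """My payoff under the 'masks only protect others' belief."""
    return 10 * other - 2 * me

def best_response(other):
    return max((WEAR, SKIP), key=lambda me: payoff(me, other))

# Skipping the mask dominates: it is the best reply either way...
assert best_response(WEAR) == SKIP
assert best_response(SKIP) == SKIP

# ...so both self-interested players end up at (0, 0), even though
# mutual mask-wearing would give each of them 8.
print(payoff(SKIP, SKIP), payoff(WEAR, WEAR))  # 0 8
```

This reproduces the (0, 0) versus (8, 8) gap described above: the individually rational choice leaves both players worse off than cooperation would.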

Model 2: The assumption is that masks only protect those wearing the mask.

People with this belief set argue that they should have the liberty to risk their own health. For example, I came across one poster held by a protester that read:

“If you’re wearing a mask, why would you care if I’m not? Your mask works, right?”

Of course, this is not how the virus works — by electing to not wear a mask, you are also risking others’ lives. Nevertheless, let’s structure our model around this belief set to see if cooperation results in the most efficient outcome, regardless of this belief’s accuracy. 

In the above model, individuals who don’t want to wear masks clearly believe that the benefits of not wearing a mask (avoiding the discomfort and feeling ‘free’) outweigh the potential health consequences. Therefore, we need to readjust the values we assign, reversing the relative magnitudes so that the cost outweighs the benefit. For example, the benefit of wearing a mask can equal 2 and the cost of wearing a mask can equal -10.

Based on this, the optimal outcome for both individuals would be to not wear a mask. This is the outcome that provides the highest utility to both individuals separately and as a group. 



However, translating this to the real world shows that this might not necessarily be the case.

While the immediate cost of not wearing a mask may be a negative impact on your health, the pandemic continues to surge when a large number of people choose this route. As a result, your favorite restaurant may remain take-out only. You’ll continually abide by the mandatory mask restrictions at stores and buildings. NFL stadiums remain closed to fans. Paradoxical, isn’t it?

It is important to recognize that there are other players besides individuals, such as public and private entities, who make decisions based on the statistics of the pandemic. As a result, our costs and benefits are not limited to the short term, and we may be blinded by present bias. This makes a strong case for why cooperating and wearing a mask might, in fact, be the highest-utility outcome.

Model 3: The assumption is that masks offer no protection, or that the virus is a fabricated hoax

Yes, some people still believe that the coronavirus doesn’t really exist, or that it isn’t dangerous. This results in the decision not to wear a mask. Similarly, those who believe that masks offer no protection whatsoever will also decide not to wear a mask. In such cases, the perceived benefit (the mask’s effectiveness) of wearing a mask is 0, and the cost (discomfort) of wearing the mask is still -2.

If these incorrect beliefs happened to be true, then the optimal outcome for society would be to not wear a mask, right? Well, it’s not that simple. Masks are more than just protective gear — they are a signal of cooperation and responsibility. While some individuals may believe that masks are not protective, others still do. In fact, most Americans do believe that the pandemic is dangerous, regardless of whether it actually is or isn’t. So how does this affect society?

Let’s consider the economy. During the initial shutdowns, US consumer spending significantly decreased. After the economy re-opened, spending increased again, but not quite to its original levels. Even a 10% decrease from baseline spending levels can spell economic catastrophe, characterized by high unemployment and recession.

One contributing factor to why spending hasn’t returned back to normal is because many people are still concerned about the dangers of the virus. This may be due to the fact that people still see many others deciding not to wear a mask.

And thus, cooperation again results in an optimal outcome for society, even when we disregard any of the ‘controversial’ beliefs around the health dangers of the virus or the effectiveness of wearing a mask. Reopening the economy only works if everyone is convinced that it really is safe to go outside. By not wearing a mask, you send the opposite signal, thereby depressing the economy by persuading others to stay inside. 

Final Takeaways

Game theory shows us that, regardless of what an individual believes, it is in their own self-interest to wear a mask.

While the conflict surrounding wearing masks will persist, these insights shed light on a new perspective on the benefits of wearing masks during this time, even outside the realm of public health and science. Next time you encounter a family member, friend, co-worker, or even a stranger who is against wearing masks, consider explaining that their decision, although self-interested in the short-run, only hurts them in the long run.

Just like in the prisoner’s dilemma, cooperation results in the most efficient outcome. If we cooperate and wear masks, the pandemic will be better mitigated and we may finally find true freedom again.

Combining AI and Behavioral Science Responsibly

If you haven’t spent the last five years living under a rock, you’ve likely heard at least one way in which artificial intelligence (AI) is being applied to something important in your life. From determining the musical characteristics of a hit song for Grammy-nominated producers1 to training NASA’s Curiosity rover to better navigate its abstract Martian environment,2 AI is as useful as it is ubiquitous. Yet despite AI’s omnipresence, few truly understand what is going on under the hood of these complex algorithms — and, concerningly, few seem to care, even when it is directly impacting society. Take for example the United Kingdom, where one in three local councils are using AI to assist with public welfare decisions, ranging from deciding where kids go to school to investigating benefits claims for fraud.3

What is AI?

In simple terms, AI describes machines that are made to think and act like humans. Like us, AI machines can learn from their environments and take steps towards achieving their goals based on past experiences. The term “artificial intelligence” was coined in 1956 by John McCarthy, a mathematics professor at Dartmouth College.4 McCarthy posited that every aspect of learning and other features of human intelligence can, in theory, be described so precisely that a machine can be made to mathematically simulate them.

Back in McCarthy’s era, AI was merely conjecture that was limited in scope to a series of brainstorming sessions by idealistic mathematicians. Now, it is undergoing a sort of renaissance due to massive advancements in computing power and the sheer amount of data at our fingertips.

While the post-human, dystopian depictions of advanced AI may seem far-fetched, one must keep in mind that AI, even in its current and relatively rudimentary form, is still a powerful tool that can be used to create tremendous good or bad for society. The stakes are even higher when behavioral science interventions make use of AI. Problematic outcomes can occur when the uses of these tools are obfuscated from the public under a shroud of technocracy — especially if AI machines develop the same biases as their human creators. There is evidence that this can occur, as researchers have even managed to deliberately implement cognitive biases into machine learning algorithms according to an article published in Nature in 2018.5

Machines that act like us

A term that is almost as much of a buzzword as AI is machine learning (ML), which is a subset of AI that describes systems that have the capability of learning automatically from experience, much like humans. ML is used extensively by social media platforms to predict the types of content that we are most likely to read, from the news articles that show up on our Facebook feeds to the videos that YouTube recommends to us. According to Facebook6, their use of ML is for “connecting people with the content and stories they care about most.”

Yet perhaps we only tend to care about the things that reinforce our beliefs. Analysis from McKinsey & Company argues that social media sites use ML algorithms to “[filter] news based on user preferences [and reinforce] natural confirmation bias in readers”.7 For social media giants, confirmation bias is a feature, not a bug.

Worldwide Google searches for machine learning
Source: Google Trends

Despite concerns of ML-generated feedback loops that create ideological echo chambers on social media sites8 — a concern that may rest on an incomplete view of individuals’ media diets, according to research from the Oxford Internet Institute9 — these (and many other) applications of ML are not inherently negative. Much of the time, it can be beneficial for us to be connected with the people and content that we care about the most. However, problematic uses of ML can cause bad outcomes: If we program machines to optimize for results that conform to our normative views and goals, they might do just that. AI machines are only as intelligent, rational, thoughtful, and unbiased as their creators. And, as the field of behavioral economics tells us, human rationality has its limits.



When AI is used for the wrong reasons

The existence of biases does not necessarily mean we should slow down or stop our use of AI. Rather, we need to be mindful as we proceed so AI doesn’t become a sort of enigmatic black box over which we ostensibly have little control. Artificial intelligence and machine learning are simply tools we have at our disposal; it is up to us to decide how to use them responsibly. Special attention is required when we use a tool as powerful as ML — one that, when partnered with behavioral science, has the potential to exacerbate the biases that impact our decision making on an unprecedented scale. Bad outcomes of this partnership could include a reinforcement of biases we have towards marginalized individuals, or myopia towards equitable progress in the name of calculated optimization. Mediocre outcomes could include the use of ML-infused behavioral science interventions to sell us more stuff we don’t need or to bureaucratize our choice environments in a web of tedium. These tools could also encourage pernicious rent-seeking by uninspired businesses, leading to stifled innovation and lower competition.

Targeted nudges

Does any good lie at the intersection of ML and behavioral science? With an asterisk that strongly cautions against bad or mediocre uses — or the act of carelessly labelling ML as a faultless panacea — the answer is yes. Behavioral science solutions that are augmented with ML can better predict what interventions will work most effectively and for whom. ML can also allow us to create personalized nudges to better scale over large, heterogeneous populations.10 These personalized nudges could do wonders for addressing qualms about the external validity of randomized controlled trials, a type of experiment that is commonly used in behavioral science to determine which interventions work and to what degree. Idealistic daydreaming isn’t necessary to think of the many different pressing policy problems that could benefit from precise nudges. From predicting which messages will be the most salient to specific individuals, to personalized health recommendations based on our unique genetic makeup, many policy areas exist as suitable candidates for these kinds of interventions.

Going forward

The benefits of using ML to improve behavioral science applications may indeed outweigh the risks of creating bad outcomes — and, perhaps more pervasively, mediocre ones. In order for us to get it right, behavioral science must play a role in identifying and correcting the harmful biases that impact both our decisions and the decisions of our intelligent machines. When using AI, we must remain faithful to a key tenet of behavioral science: Interventions should influence our behavior so we can make better decisions for ourselves, all without interfering with our freedom of choice. Like their creators, intelligent machines can be bias-prone and imperfect. It is crucial that we remain aware of this as the marriage between behavioral science and AI matures so we can use these tools purposefully and ethically.

Cognitive Science Can Improve Decision Making


This article is part of a series on cutting edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. At TDL, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.


As a socially conscious applied research firm, TDL is interested in connecting cutting-edge research with real-world applications. To further this interest, The Decision Lab reached out to Michał Klincewicz, an assistant professor in the Department of Cognitive Science at Tilburg University, to learn more about his work on using video games to explore moral cognition and stimulate moral insight, as well as his use of machine learning to spot conspiratorial online videos.

In his research, Professor Klincewicz combines insights from social epistemology, data science, computational linguistics, psychology, neuroscience, and philosophy to learn about what can make individuals better decision-makers. With the aforementioned disciplines, he combines a mix of empirical and theoretical thinking to create transformative technologies.

Full versions of some of Michał’s studies are available here:


Julian: How would you describe the focus of your research?

Michał: Recently, I’ve been focusing on two things: First, on video games in which players are faced with moral dilemmas. These simulations are a great way to stimulate moral insight, develop moral sensitivity, and are a versatile environment that can help us understand the psychological mechanisms behind complex decisions under uncertainty. Second, I’ve been focusing on developing machine learning algorithms that can spot conspiratorial YouTube content. A variety of insights from social epistemology, data science, and computational linguistics can be used to make these algorithms perform better. There is also a pressing need to counter the spread of misinformation.

Julian: What was your research question, broadly speaking?

Michał: I want to know what makes individuals better decision-makers. To find out, I use insights from across the cognitive sciences: psychology, neuroscience, linguistics, and philosophy. This is a mix of empirical and theoretical work, where it isn’t always clear which discipline may turn out to be relevant. I then use this knowledge to design technologies that can facilitate decision-making or improve individuals in the long-term.

Julian: What insights did you think you’d find out from your research, and why?

Michał: Put plainly, I’m interested in finding ways to make people better. There has been a lot of discussion across disciplines about how the psychology we inherited from our ancestors has left us unprepared for rapid technological change and globalization. 

I look to identify which particular aspect of that inheritance is the main culprit and then find a way to either limit its impact or a way to counteract it with something else. There were some relatively good candidates to look at first: tribalism, biases, negative emotions, and general intelligence.

I thought one of these or a combination of them would be a good place to start and then that most of the work would be in designing an appropriate intervention to deal with it. Artificial intelligence techniques seemed like a very promising avenue at that point, given how well they do in classifying things and in finding patterns where none are immediately apparent. 

Once the main problematic psychological mechanism is identified, we can use artificial intelligence techniques to identify when it is active and design an intervention to deal with it.

Julian: What is your general research process?

Michał: I work with a number of researchers across disciplines who have similar research agendas. In short, I like to work with people who aim to understand how technology shapes individuals and their environment, for better and for worse. Their work and community are an important source of inspiration and direction for my own work.

I have also been fortunate to have dozens of talented students over the years. Together, we have designed and carried out controlled experiments, designed and tested nudges, and developed methods for studying decision-making in video games. Overall, I would characterize my research process as both vertical and horizontal collaboration: a dialogue bound together by a common commitment to serious theory and science that serves the public good.



Julian: What sorts of insights did you end up discovering?

Michał: Perhaps unsurprisingly, I found out that there is no single problematic psychological mechanism or even a set of such mechanisms that can be the primary focus for an improvement intervention. There are many individual differences responsible for the way in which people make decisions. Things like age, experience, knowledge, and so on all interact with each other to yield an idiosyncratic style of decision-making. 

However, the way I got there yielded a number of useful new methods that I aim to apply in future work, including the aforementioned video games and infrared imaging of the face. The work on conspiracy videos has given us a number of promising and scalable methods for classifying conspiratorial content. I think the work is extremely promising, and I hope it will soon result in a workable solution that can be deployed in the wild.

Julian: How do you think this is relevant to an applied setting?

Michał: I am currently starting the supervision of a PhD project realized in Tilburg’s MindLabs, a collaborative initiative that investigates human minds and artificial minds, with the aim of developing a serious game that will attract, train, and retain key personnel in the logistics sector. The team from Tilburg University’s CSAI department will work with the Port of Rotterdam and other industry partners, but the project could have found a home in any setting where critical decisions are made by experts.

Julian: What do you think are some exciting directions for future research?

Michał: The most exciting direction for this research is to see it in the wild, outside of the academy, making a genuine difference in people’s lives. I believe that our work on decision-making and nudging can, with sufficient support, mitigate some of the damage caused by poor individual decisions. The work on classifiers for conspiratorial content has the potential to help control the spread of misinformation and counteract its negative impact on democracy and public health, as well as give us a new tool to combat radicalization online.

Easing The Job Search During COVID-19

Currently, many workplaces across the world are at least partially closed due to the COVID-19 pandemic. According to figures from the International Labour Organization, a UN agency committed to advancing social and economic justice through international labor standards,5 these closures have caused an estimated 14% loss in working hours worldwide, something that is reflected in the forecasted 4.9% reduction in global GDP this year.14 As a consequence, millions of people are expected to fall into extreme poverty.13

Employment affects the quality of life and development of the most vulnerable people, and consequently, all of society.12 Therefore, any country that wishes to promote prosperity and inclusion in a sustained manner over time must seek to offer jobs that meet the demand of the population, especially in the context of a fragile economy.12 Despite this, removing barriers to job opportunities, with an emphasis on disadvantaged sectors (women, young people, etc.), is no easy task.

In the face of this, behavioral economics has contributed to the identification of cognitive biases present during job searches. Perhaps the unemployed do not act rationally when trying to get work — despite the potential benefits they would have by doing so — due to systematic deviations that influence their decision-making.6

A team of researchers led by Linda Babcock, the head of the Department of Social and Decision Sciences at Carnegie Mellon University, pointed to the difficulty involved in this type of job search. This difficulty may be even greater than mainstream economics assumes, due to two problems: the need for relevant, easily understandable information, and the limited willpower of job seekers.2

To provide a solution, there exist low-cost interventions that could reduce the time a person spends unemployed. These interventions merit study by policymakers at a time when a simpler job search process is highly desirable for society as a whole.

It stands to reason that every unemployed person should be actively seeking a new job. However, people often procrastinate and spend their time on other activities. And even if they have searched, have they done so effectively? As Babcock et al. pointed out, people tend to underestimate the benefits of conducting an adequate job search. 

So, a question arises: What factors influence the intensity of a job search?

According to recent studies, some factors might be biases related to the level of impatience (aka present bias), overconfidence, and a lack of willpower in individuals (procrastination).3

To address these biases, Abel et al. (2019) carried out an action-plan intervention with 1,100 unemployed South African youths, which was expected to reduce the intention-action gap and thus increase search intensity. An action plan, derived from contributions of behavioral economics, consists of breaking a complex task or process down into small, concrete steps in order to directly incentivize action.8 In this study, the weekly goals to be met were explicitly related to the number of applications submitted, the identification of job opportunities, and the number of hours spent on job searches.

The results show that job applicants who used action plans received 24% more responses to their applications, as well as a 30% increase in job offers, compared to their peers who were not offered a plan to follow.

Regarding the probability of getting a job, the beneficiaries achieved an increase from 11.5% to 16.4%. The reason for these results lies in the improvement — in terms of quality and frequency — of the job search via the act of creating and following through with an action plan.
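To put these figures in perspective, the jump from 11.5% to 16.4% is an absolute gain of 4.9 percentage points, which corresponds to a relative increase of roughly 43% in the probability of getting a job. A minimal sketch of that arithmetic (the two probabilities come from the study above; the variable names are mine):

```python
# Reported probabilities of getting a job, from Abel et al. (2019) as cited above.
baseline = 0.115   # without an action plan
treated = 0.164    # with an action plan

absolute_gain = treated - baseline        # gain in percentage points
relative_gain = absolute_gain / baseline  # gain relative to the baseline

print(f"{absolute_gain:.3f}")   # 0.049 -> 4.9 percentage points
print(f"{relative_gain:.1%}")   # 42.6% relative increase
```

The distinction matters when reading such results: a 4.9-point rise sounds modest, but relative to an 11.5% baseline it is a substantial improvement.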

However, despite these positive and significant results, it is important to remember that the experiment focused on short-term behavioral changes, so doubt remains as to whether similar results could be ensured in the medium and long term.

For example, a valid question yet to be answered is whether job seekers would have persisted in their quest if they had experienced further failures in their results.3 Apart from that, the use of this type of intervention in labor policies can be recommended as a possible solution, given their ease of implementation and their ability to obtain results in a short amount of time.

Concise and easy information

Other problems linked to the job search are the need for information on labor market conditions, the process of applying for a job, and the skills required for the positions of interest. All of this information must be presented to the reader in an attractive and simple way, beyond merely addressing the content of the roles.2 In this context, behavioral economics suggests the use of nudges to achieve desired behavioral changes.

Alongside a team of researchers, Steffen Altmann, an Associate Professor of Economics at the University of Copenhagen, considered the importance of providing effective information, as well as the usefulness of nudges. To do so, the team conducted an experiment that aimed to reduce the time a person spends unemployed.1 The intervention consisted of providing a brochure with easy, concise information of interest to job seekers. The participants comprised 54,000 German job seekers, who were divided into two groups: one received the leaflet and the other did not.


The leaflet was made up of four parts. The first showed labor market statistics along with positive messages, such as “it is the ideal moment” or “you will be successful in finding employment.” Highlighting the positive aspects of the information provided suggests that the authors were cognizant of the framing effect and its ability to drastically alter behavior. The second and third sections explained the relationship between carrying out a job search during unemployment and quality-of-life variables (health, family life, etc.).

Citing evidence of the benefits of job searching in the brochure was an attempt to counteract the availability heuristic, a behavioral concept whereby we evaluate the probability of an event based on familiar examples that come to mind. For example, if we know that our neighbors have not found a job for more than a year, we might think that our own chances are also low. Finally, the fourth part mentioned the options available when looking for a job, such as social networks and employment agencies.

The results were significant among groups of people at risk of long-term unemployment — those limited by factors such as education or work experience. Specifically, one year after the experiment, a 4% increase in employment and earnings was found in this group compared to their peers. It is important to note the low cost involved in obtaining such an improvement: less than €1 was spent per brochure.

A final piece of evidence supporting the use of concise and easy information comes from a study led by Monika Mühlböck of the University of Vienna, which paired an informative intervention with a short survey designed to encourage reflection, known as a “nudge of reflection.”9 The authors reduced the time that participants spent unemployed through access to information and reflection on the job search.

The study’s intervention targeted 37,000 recently unemployed young people in Austria, who were shown a short video and given a short email survey. Four groups received the two nudges in different orders, and positive, significant results were found in the group that received the survey first, followed by the short video. The most positively affected participants, characterized by a low level of education, saw their likelihood of gaining employment increase by 3.7%.

Concluding remarks

The job search is often a more complex issue than many assume. Despite how frequently such searches are performed, job seekers need easy and concise information throughout the process. In this context, behavioral economics can offer analysis and suggest behavioral changes to help those who are looking for jobs.

Therefore, as many authors have noted, there is a lot of potential in the growing prevalence of online job searches.7 The ease of collecting data on how employers and applicants search makes it possible to design interventions that incorporate lessons from labor economics and behavioral economics into the job search. The way job offers are matched to an applicant’s skills could also be improved. Furthermore, measuring the amount of information that causes cognitive overload in applicants could enrich further studies.

In the post-pandemic world, the online job search — even in developing nations — may become more widespread and common than before. Applying insights derived from studies of online job searches, as well as behavioral economics models, can promote a better understanding of the biases involved in the recruitment process. Doing so would allow a greater range of action and could achieve greater effectiveness, something that will be vital in the coming years.