Strengthen Your Strategy with Cyber Scenarios

How prepared is your organization for a cybersecurity attack? The path ahead is one of converging crises: geopolitical tension, socioeconomic disparity, climate risks, technological acceleration, and global health challenges. Proactive moves can avert massive breaches. And we’ve seen that when breaches do happen, the majority of the damage comes not from the attack itself but from how effectively companies respond internally and communicate with regulators, shareholders, and the media.

An often-overlooked component of preparation is scenario planning for upcoming challenges. Strategic foresight experts Sanjay Khanna and Alan Iny make the case for prioritizing scenario planning in an uncertain futurescape in order to close preparedness gaps, creating an advantage for organizations that are ready to face uncertainty. The authors provide the steps businesses must take to develop scenario-informed, interdisciplinary strategies and to overcome the behavioral biases that make it difficult to identify risks, prepare for them, and act decisively and effectively when those risks materialize.

This report, a joint effort from The Decision Lab and the Boston Consulting Group, focuses on how and why organizations can overcome these obstacles amid increasing risks, notably the rise of cyber risk. The report includes two detailed, research-driven cybersecurity scenarios for 2024, as well as the steps stakeholders need to take to prepare for such crises, from cyber-physical systems to behavioral dynamics. These scenarios are designed to help your organization build resilience against cyber shocks amid converging crises.

Click here to download the report

And here are additional ways for you to get involved in the conversation:

  • Listen to Sanjay Khanna explore scenario development on the TDL podcast, and our special roundtable episode on cybersecurity scenarios with all four co-authors
  • Watch a cyber scenarios webinar to be hosted by the team in autumn 2021, engaging with senior executives in cybersecurity and their use of scenario planning
  • Sign up to receive updates about all this and more, directly in your inbox, by joining the 20,000 subscribers to our monthly newsletter
  • Reach out directly to Sanjay, Alan, Michael or Brooke with any questions or comments

This is Personal: The Do’s and Don’ts of Personalization in Tech

I have a rule about using Netflix in my household: We watch dark, scary shows on my partner’s account, and humorous sitcoms on mine.

I claim that this is to maintain an easy differentiation for the recommendation algorithm and so that, depending on our mood, we can pick the relevant account and start watching right away. But to be honest, the real reason I insist on this separation is that it makes me feel good about beating Netflix at its own game.

This way, as far as Netflix knows, I am a bright person with a sunny disposition, who only watches positive, uplifting comedies of 20-minute durations, hardly ever binge-watches, and will happily return to old favorites such as Modern Family and Friends every few months. My partner, on the other hand, is a dark personality who watches crime shows and thrillers (sometimes through the night), loves getting into the minds of psycho killers, and will consume anything that matches this description.

But who are we really? Well, I am not spilling the beans here and I definitely don’t intend to solve this mystery for Netflix. 

Me 1, Netflix 0. Or so I think.

But who else do I hide my true self from? My fitness app? My grocery shopping app? Amazon? Spotify? As more and more platforms go down the path of using data to personalize the customer experience, this cat-and-mouse game will only get more interesting.

Why does personalization work? What are its limits? How does psychology make an appearance in this complicated tech story? In this article, I’ll be breaking this down.

The complex world of personalization

Personalization refers to the use of a consumer’s historical data to curate their experience on a platform, making it more customized. We see this everywhere: when you open an app and it greets you by your first name, shows you recommendations based on your past purchases, and convinces you to buy something by offering a discount on exactly the thing you wanted. Or when you open a music app and there’s a playlist for the somber mood you’re currently in.

Most tech companies today rely heavily on personalization technology. And rightly so: it leads to more customer engagement and more revenues. The numbers speak for themselves:

  • 75% of content watched on Netflix is based on the platform’s recommendations.1
  • 50% of listening time on Spotify comes from personalized playlists created using such technologies.2
  • 70% of time spent endlessly scrolling through YouTube videos comes from intelligent recommendations.3
  • 35% of products bought on Amazon were recommended by the algorithm.4

And let’s face it: as much as I might try to hide my true self from Netflix, a personalized experience makes me feel good. As per an Accenture survey, a whopping 91% of consumers are more likely to shop with brands that recognize them, remember them, and provide relevant offers and recommendations.5 On top of that, 83% of consumers are willing to share their data to enable a personalized experience.

Image courtesy of The Marketoonist

You may be wondering: If users want personalization, then what’s the problem? The problem is that personalization is a bit like walking a tightrope. A very thin line separates the “good” kind of personalization from the creepy kind.

“I like it because it’s so similar to me” can easily become “I don’t like it because it’s eerily similar to me.”

“This is relevant to me and saves me time and effort” can easily become “The algorithm is stereotyping me and that’s not cool.”

This switch from good to bad is where user psychology comes in. Understanding the real reason why personalization works can help us understand why it does not work sometimes.

When does personalization really work?

If you ask a tech person about the science behind personalization algorithms, they will tell you something along these lines: Once you have sufficient historical data about consumers, you build a model and find the features that best predict a user’s behavior. You settle on a model with high predictive power, use it to find similar consumers in the dataset, and aggregate their behaviors. Put together, this lets you predict what a user will do next and serve the right recommendations.
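Stripped to its essentials, that pipeline resembles user-based collaborative filtering: score how similar users are, then aggregate the behavior of your nearest neighbors. A toy sketch (the `ratings` data, cosine similarity, and weighting scheme are illustrative assumptions, not any platform’s actual system):

```python
from math import sqrt

# Invented toy data: user -> {item: rating}. In practice this would be
# large-scale behavioral history (views, purchases, listens).
ratings = {
    "ana":  {"comedy_a": 5, "thriller_a": 1},
    "ben":  {"comedy_a": 4, "comedy_b": 5},
    "cara": {"thriller_a": 5, "thriller_b": 4},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors over shared items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(r * r for r in u.values()))
    norm_v = sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=1):
    """Aggregate similar users' ratings to score items `user` hasn't seen."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = cosine(ratings[user], ratings[other])
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))  # → ['comedy_b'], since ana rates like ben
```

Because `ana` and `ben` agree on `comedy_a`, ben’s enthusiasm for `comedy_b` outweighs cara’s thriller recommendation, which is the “find similar consumers and aggregate their behaviors” step in miniature.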

Well, they are right. But the one thing they missed is the user: the actual person. When does a user want something that’s personalized for them? It turns out quite a few ducks have to line up for a user to like what has been personalized for them. Here are just a few to get you started:

  1. Emotion match: Consumers operate in different emotional states, and this impacts their perception of the context. Emotions include psychological arousal (such as “peak” or extreme emotions like anger, worry, and awe), general mood valence (feeling happy or sad), and active thinking style (positive or negative).

    A study of New York Times headlines showed that content that evokes high-arousal positive emotions (e.g. awe) or negative emotions (e.g. anger or anxiety) gets shared the most, indicating a “match” with the reader.7 In other words, an algorithm will work best when it somehow matches the contextual emotional state of the customer.
  2. Attitude match: Consumers hold different attitudes towards different things, which can color how they make decisions. Types of attitudes include a preference for facts vs a preference for emotions; moral attitudes, such as core principles and beliefs; political attitudes; and so on. An experimental study showed that emotional ads work well for those with a high need for affect, whereas cognitive ads (which share facts and information) work well for individuals with a high need for facts.8

    Consider the McDonald’s example below. Both ads sell the same product, but have different appeals.

So an algorithm, while highly skilled at predicting what consumers will respond to best, might still need to take into consideration the consumer’s attitude towards receiving information from different categories.

  3. Goal match: Consumers approach decisions with different types of goal states, and they are looking for information that can help them achieve this goal. For example, a hedonic purchase (i.e. something you buy purely for pleasure) vs a utilitarian purchase (i.e. something you buy as a means to an end) have different goals.

    Similarly, approach goals (wanting to embrace the positives) vs avoidance goals (wanting to avoid the negatives) have different requirements. An experimental study showed that donation appeals for a library framed in terms of rewards worked well for approach-oriented people, while appeals framed in terms of losses worked well for avoidance-oriented people.9 An algorithm will have to keep this in mind when deciding how to show content to a user.
  4. Personality match: Many studies have shown that user psychographics are an important determinant of their behavior. Personality dimensions are measured on various scales; the most famous, the Big 5 or OCEAN personality model, has been adopted around the world. Spotify published a paper showing a clear correlation between song choices and different personality traits.10 Thus, personality traits are another thing algorithms need to take into consideration.
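One way to picture these matches is as a re-ranking layer sitting on top of a base recommender: items keep their predicted relevance score, but get boosted when their tone and framing fit the user’s current state. A purely illustrative sketch (the user attributes, item tags, and weights are all invented):

```python
# Hypothetical user context alongside base relevance scores from a
# recommender. All values are invented for illustration.
user = {"mood": "somber", "goal": "avoidance"}

candidates = [
    {"title": "Upbeat Comedy",  "base": 0.9, "mood": "happy",  "framing": "approach"},
    {"title": "Quiet Drama",    "base": 0.7, "mood": "somber", "framing": "avoidance"},
    {"title": "True Crime Doc", "base": 0.8, "mood": "somber", "framing": "approach"},
]

def rerank(user, items, mood_w=0.3, goal_w=0.2):
    """Boost items whose emotional tone and goal framing match the user's
    current state, on top of the base relevance score."""
    def score(item):
        s = item["base"]
        if item["mood"] == user["mood"]:
            s += mood_w      # emotion match bonus
        if item["framing"] == user["goal"]:
            s += goal_w      # goal match bonus
        return s
    return sorted(items, key=score, reverse=True)

for item in rerank(user, candidates):
    print(item["title"])
```

Here the highest base-score item (“Upbeat Comedy”, 0.9) loses to “Quiet Drama” (0.7 + 0.3 + 0.2 = 1.2) because the latter matches both the user’s mood and goal orientation: psychology overriding raw predicted relevance.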

As you can see, the right algorithm and the right data are just one part of the puzzle. Even if these fall into place, personalization still needs one more piece: an understanding of user psychology.

So, now that data, algorithms, and user psychology are in place, do we have a match made in heaven?

The pitfalls: When does personalization fail?

Unfortunately, even after all this, personalization can fail. 

Image: The spectrum of personalization reactions

Let’s break the failure down into two stages.

Stage 1: Annoyance bumps

An annoyance bump is a small snag in the journey that causes customers to question personalization. In this case, the user generally holds a positive view of personalization, but some experiences leave a sour taste in their mouth. These include:

  1. Irrelevant personalization: When personalization segments a user into a category based on unrepresentative, one-off purchases. For example, I bought my partner a PlayStation and now I’m getting ads for a bunch of video games.
  2. Insensitive personalization: When personalization does not take into account real-world context. For example, this past month, a photo-printing company sent out mass congratulatory “new baby” emails, not taking into account the number of women who might be going through miscarriages or fertility issues.11
  3. Unhelpful personalization: When, despite personalization, the cognitive load for the consumer does not go down. For example, people often complain about not being able to choose quickly on Netflix, despite the personalization.

Stage 2: The Creepiness Ditch

This term, coined by John Berndt, is an important twist in the personalization story.11 The creepiness ditch is the growing discomfort people feel when a digital experience becomes personalized in a way that is disorienting or uncomfortable.

Image: Adapted from Personalization Mechanics by John Berndt

In the creepiness ditch lie serious offences, such as:

  1. Stereotyping: When messages target somebody based on a stigmatized or marginalized identity, personalization fails. In one study, when consumers believed they had received an ad for a weight loss program based on their size, they felt “unfairly judged” by the matched message.13
  2. Excessive Retargeting: When the same messages are shown repeatedly, it leads to reactance from consumers. 55% of consumers put off buying when they see such ads. When they see the ad 10 times, more than 30% of people report actually getting angry at the advertiser.14
  3. Privacy: When a message is too tailored and consumers become consciously aware of the targeting, the sense of feeling tricked can cause the match to backfire.

The creepiness ditch is important because when customers fall into it, they churn. There are many stories of big tech companies faltering here. A few years back, Netflix drew controversy when viewers objected to targeted artwork that varied movie posters based on how the algorithm had categorized them (including racialized identities such as “Black”).15 Similarly, Amazon was called out for using algorithms that recommended anti-vaccine bestsellers and juices that purported (falsely) to cure cancer.16

Making sure personalization works

The full picture tells us that making personalization work the right way is beneficial for both users and companies. 



In order to make personalization really work, the teams behind design, data, and algorithms need to audit themselves on five pillars:

  1. Control: Are we giving users enough control over the personalization? Do users know they can control personalization? Can the user decide what data they wish to share with us?
  2. Feedback: Are we letting users give us feedback on our personalization? Can they tell us when something seems irrelevant to them?
  3. Choice: Do users have a choice to opt in to personalization? Can they choose to not be a part of the system at all?
  4. Transparency: Are we sharing with users why they are seeing a certain personalization? Do users know how the algorithm works?
  5. Ethics: Are we independently assessing our personalization outcomes on ethics? Do we have scope to engage third-party assessors for such audits?17

These are just some guidelines that can help companies be mindful of the pitfalls of personalization, and ensure they steer away from the bumps and the ditches. Like all things in life, some amount of control only makes the experience better for all stakeholders. 

Don’t get me wrong, I am still batting for personalization. Even as I type this, Spotify is playing “focus” music for me, which it knows makes me more productive. After finishing this article, I will go watch something on my happy Netflix account. Or maybe I will indulge myself with a thriller on my dark Netflix account. Or maybe, I will create a third account, and watch only documentaries, just to confuse the good folks at Netflix. It’s a fun game. They know it, I know it.

Tracing the Origins of the Anti-Vaccine Movement

The COVID-19 pandemic was accompanied by a surge in conspiracy theories that blunted global efforts to stop the spread of SARS-CoV-2. At the time of writing, more than three and a quarter million people have died of COVID-19.1 It’s likely that many of these deaths could have been prevented but for the proliferation of conspiracy theories that reduced the public’s trust in medical experts and government officials.

These conspiracy theories are driven by the unintentional and intentional dissemination of false information—misinformation and disinformation, respectively.2 Disinformation is particularly harmful as it is designed to damage social institutions.

In February 2020, the WHO identified the danger posed by the proliferation of misinformation and disinformation to combatting COVID-19.3 The Director-General of the WHO and Secretary-General of the UN both characterized this as an infodemic.4,5 One year later, empirical evidence indicates that existing strategies for mitigating this infodemic are inadequate.6 Policymakers must use these data to design effective strategies that counter the growing public health threat posed by novel SARS-CoV-2 variants. Central to this goal is identifying the target for these intervention strategies.

Vaccination efforts are impeded by misinformation and disinformation

In an article for TDL, Sanketh Andhavarapu identified vaccine hesitancy and the anti-vax movement as the greatest challenge to controlling and ending the COVID-19 pandemic. Vaccines provide the greatest measure of protection against the SARS-CoV-2 virus, and the risk of vaccine side effects is far less than the risk of complications or death from COVID-19. However, a large fraction of the U.S. population is reluctant to be vaccinated. This hesitancy stems from the spread of vaccine misinformation and disinformation, particularly through social media.7

The spread of anti-vax conspiracy theories must be stopped if vaccines are to achieve their full potential in ending the global pandemic. Behavioral interventions aimed at promoting vaccination campaigns must target online misinformation and disinformation.

The design of effective interventions requires an understanding of the origin of the modern anti-vax movement. This began nearly a quarter-century ago with the publication of a clinical study that reported an increased incidence of autism spectrum disorder (ASD) in individuals who were vaccinated for measles, mumps, and rubella (MMR). This case also serves to highlight lessons learned by the scientific and clinical research communities that will strengthen efforts to stop the dissemination and proliferation of anti-vax ideology.

Vaccines, autism, and the clinical study that changed the world

The anti-vax movement has existed since Edward Jenner established the use of vaccines to prevent smallpox.8,9,10 Early resistance to vaccination was based on grounds of civil liberty and religious objection to the injection of non-human substances. Although these concerns also lie at the heart of the modern anti-vax movement, Victorian-era vaccine hesitancy was also rooted in England’s class divide, a poorly regulated medical community, and inadequate public education efforts.11

In contrast to Victorian times, modern vaccination efforts in the information era are bolstered by free public access to more knowledge than has ever been available at any other time in human history. Unfortunately, this has not dampened the deeply emotional and politicized opposition to vaccines that scientists, healthcare workers, and policymakers face today.12 This is due in large part to a failure of the scientific community to address the anti-vax movement when it was catalyzed by a 1998 publication in the peer-reviewed medical journal, The Lancet.13

This study reported the onset of regressive autism in 12 patients within two weeks of receiving a measles-mumps-rubella (MMR) vaccine. (The authors also linked vaccination with bowel disease, but this is not often mentioned by anti-vaxxers.)14 The possibility that vaccines could cause neurodevelopmental disorders in previously healthy individuals appropriately received great attention in the academic and public stakeholder communities. Its publication in The Lancet—one of the world’s most influential peer-reviewed clinical research journals—gave it immediate credibility. However, it was soon noted that the implication of a causal link between MMR vaccination and regressive autism in the work reported by Wakefield and colleagues was based on shoddy evidence.15

The publication of unsubstantiated claims linking vaccine use with autism was immediately criticized and followed by studies refuting the causal association between vaccines and developmental disorders.16,17 Investigations by Sunday Times journalist Brian Deer followed from this critique and culminated in a complaint to The Lancet editors of possible research misconduct committed by Wakefield and colleagues.18 The editors of The Lancet were presented with credible evidence of research misconduct and were ethically obliged to investigate this 2004 complaint.

An editor of The Lancet, Richard Horton, published a response stating that there was no basis for Deer’s allegations.19 The Lancet also allowed Wakefield and senior co-authors to publish a mild correction of interpretation20 and to outright refute Deer’s allegations of misconduct without providing any evidence to support their position.21,22 In further contempt of scientific ethics, a complaint was filed against the investigative journalist.

Deer was not deterred. His investigations exposed extensive fraud committed by Wakefield and colleagues, including:

  • The selective exclusion of specific traits in patients that did not fit the article’s conclusions;
  • Failure to report that 5 of the 12 patients had been previously diagnosed with developmental abnormalities at the time of recruitment into the study;
  • The labeling of all 12 patients as “healthy,” when in reality all had pre-existing conditions that were relevant to the study;
  • Failure to disclose that patients were recruited to the study by an anti-vax organization; and
  • Failure to report that the study was initiated and funded by lawyers planning litigation against vaccine manufacturers, and that Wakefield received payment from this source.23,24,25

In 2010, increasing pressure led the editors of The Lancet to quietly issue a full retraction notice for the Wakefield article.26 Wakefield remains adamant that his work linking an MMR vaccine to the development of autism is based on ethical and replicable clinical research. The Wakefield case is now condemned by the academic community as one of the greatest frauds of the 20th century, as is best exemplified by the 2011 article published by the editors of the British Medical Journal aptly entitled “Wakefield’s article linking MMR vaccine and autism was fraudulent.”27

Unfortunately, the belated response by the Lancet editors to the Wakefield case did very little to undo the damage caused by the persistence of this work in the public record for 12 years.

The utility of scientific research depends upon public trust

The damage caused by the Wakefield article is evident in statistics on vaccination trends following his 1998 publication. In the U.K., MMR vaccination decreased from 92% in 1996 to 84% in 2002, and by 2003 decreased below the level necessary to prevent an outbreak of measles in London.28 Outbreaks of measles have been reported around the world and caused deaths that would likely have been prevented by vaccines.


The persistent consequences of the Wakefield case emphasize several important lessons for scientists, physicians, and policymakers who face the daunting task of addressing vaccine hesitancy and denial during the COVID-19 pandemic.

First, the academic community failed public stakeholders. The work of investigative journalists—not scientists and physicians—exposed the fraudulent link between vaccines and autism. The failure of the academic community to respond ethically to Wakefield’s fraudulent research linking vaccines with autism was the catalyst for a movement that now promotes the erroneous belief that vaccines cause autism and that scientists cannot be trusted. Damage caused by this failure must be undone if efforts to reach minimum vaccination goals are to be achieved. Public trust in the work performed by scientists and physicians must be restored.

Second, the destructive consequences of the Wakefield fraud would have been mitigated or prevented had the points of concern raised by a small number of scientists, and the evidence brought forth by Deer in 2004, been acted upon ethically by the academic community. External oversight of the research community will serve to restore public trust in science and promote scientific progress by preventing fraud. Unfortunately, the need for this oversight was not met, and this left a void that was filled with misinformation and exploited by the purveyors of disinformation.

Third, the spread of misinformation must be stopped, and more should be done to detect and shut down disinformation campaigns. Physicians and scientists increasingly view the loss of public trust in scientific research as the greatest threat facing healthcare and social stability in the future.29 During the ongoing pandemic, non-authorities can say anything they want and are viewed as trustworthy sources by public stakeholders who are now unwilling to trust scientists, physicians, and policymakers. Scientists and physicians can earn back public trust, but only by reaching out to communicate in an accessible language.30 Similarly, policymakers can encourage this process by communicating research in a non-partisan manner.31

The key to ending the spread of misinformation, disinformation, and conspiracy theories is increased access to reliable information and scientific literacy among public stakeholders.7 The reason for this need and the means to achieving this end are one and the same: social media.

The weaponization of social media

Social media-driven disinformation campaigns are targeted towards specific nations and subpopulations in order to disrupt social stability by manipulating the behavior of the public. The COVID-19 pandemic—specifically, the speed with which the SARS-CoV-2 virus was able to spread around the world, and the scale of the devastation it wrought—has renewed fears that in the not-so-distant future, such techniques could represent a new frontier in biowarfare.32 The combined effect of disinformation campaigns and naturally occurring health pandemics has the potential to be as effective as biological weapons at destabilizing societies.

Now is the time to turn COVID-19 into an opportunity to develop effective behavioral interventions that counter disinformation campaigns targeting vulnerable populations. During future pandemics, this work could be essential for saving lives and avoiding an even greater disaster than what we’ve seen over the past year.

We must address the role of social media in perpetuating and exacerbating the damage caused by the Wakefield fraud to public perception of vaccine safety. Since the outbreak of COVID-19, the role of social media in disseminating misinformation and disinformation has been tracked and reported in peer-reviewed journals. This research identified behavioral interventions targeting social media use that can increase public confidence in vaccines and science.

Achieving this will require the development of novel interventions and the improvement of existing strategies that foster cooperation between scientists and stakeholders.32 One solution is to invest in existing organizations to provide a social media platform that translates primary research into an accessible format.33 Public confidence in these organizations will be improved if they are independent of government influence.34

This strategy must be alert to changing trends in social media misinformation movements and disinformation campaigns, and be highly responsive to these changes by posting facts supported with valid primary source references.35,36 The means to fact-check disinformation in real time will likely be provided by machine learning technology.37

Success will follow from cooperation and vigilance

Disinformation campaigns exploit existing social divisions and disparities and often use hate speech to target specific audiences. Disinformation campaign managers can counter efforts to expose their campaigns, and exposure efforts are useless if the public isn’t inclined to go to verified sources first and to examine sensational information in a neutral, dispassionate manner.

It all comes down to the public’s perception of a source’s credibility. This is the major challenge facing efforts to improve public awareness of the need to identify fake news, and to increase the public’s willingness to go to verified, apolitical sources for information. Improved trust and cooperation between public stakeholders and scientific, medical, and government officials will improve health outcomes and vaccination efforts. We can only hope that the ongoing devastation of COVID-19 and the inevitability of future pandemics will drive innovations that heal the infodemic.

The Behavioral Economics of Distracted Driving

As we continue to navigate the benefits of increased technology, we must also consider the costs: the increased risk to our safety on the roads. Distracted driving, predominantly due to cellphone use while driving, is the leading cause of fatalities on the road.1 A driver using their phone is five times more likely to crash than an undistracted driver. For comparison, driving under the influence of alcohol only doubles the chances of crashing.2,3

These numbers aren’t surprising or new. We’ve long known about the dangers of distracted driving, and the majority of individuals support laws against this behavior.4 Despite this sentiment, distracted driving behaviors remain common. 91% of young drivers (aged 15–19) reported texting while driving, and 40% of these texters even admitted to doing more complex tasks (like having texting arguments or sexting) while driving.

This dissonance isn’t unique to texting. Similar attitudes are held towards speeding: most people agree that speeding is unacceptable, yet admit to doing so themselves.5

Clearly, there is a suspension of concern and consideration for ourselves (and others) when we receive a text message on the road. Most people tend to believe that distracted driving is an issue of morality (in that only bad people do it), but in fact, it is more likely a problem of cognitive bias than of values and morals. While we generally intend to “do the right thing,” we often neglect to do so in the moment. Behavioral economics aims to study these seemingly irrational behaviors in various contexts. The field offers several tools to explain distracted driving behavior and insights for solving the distracted driving problem.

Present bias and hyperbolic discounting: Why we do the things we don’t want to do

Present bias, the overemphasis of the present moment with neglect for future consequences, goes a long way in explaining why we reach for our phones while behind the wheel. Present bias takes many forms and explains choices related to drug use, overeating, smoking, and neglect for public health guidelines.6

Present bias leads us to engage in what economists refer to as “hyperbolic discounting,” which describes how we favor immediate payoffs over future ones.7 Choosing between receiving a dollar today and three dollars tomorrow is an easy decision. Yet, when asked to choose between a dollar today and three dollars a year from now, we are likely to choose the former, even though this decision is (from an economics perspective) irrational.6,8,9

Essentially, we overweight the present and underweight the future.10 We make time-inconsistent choices for the present moment that our future selves may not appreciate. In fact, we assign significantly greater weight to moments that are temporally close to us than any reasonable discount rate can explain.11
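The standard hyperbolic model captures this by valuing a reward of amount A delayed by D periods as V = A / (1 + kD), where k measures how steeply an individual discounts the future. A quick illustration in Python (the k value here is invented for illustration, not taken from the cited studies):

```python
def hyperbolic_value(amount, delay_days, k=0.05):
    """Present value of a delayed reward under hyperbolic discounting:
    V = A / (1 + k * D). Larger k means steeper discounting (more impulsive)."""
    return amount / (1 + k * delay_days)

# $1 now vs $3 tomorrow: a one-day delay barely dents the larger reward,
# so the three dollars still "feel" worth more.
print(hyperbolic_value(1, 0), hyperbolic_value(3, 1))    # 1.0 vs ~2.86

# $1 now vs $3 in a year: the distant reward is discounted below $1,
# so the immediate dollar wins, mirroring the choice described above.
print(hyperbolic_value(1, 0), hyperbolic_value(3, 365))  # 1.0 vs ~0.16
```

The same curve explains reaching for the phone: the text’s payoff is immediate, while the crash risk sits far enough in the (probabilistic) future to be discounted almost to nothing.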

A group of researchers from Penn State University aimed to see if they could use other economic concepts to describe distracted driving behavior, with interesting takeaways:

  • Impulsivity largely explains texting and driving behaviors. Those who admitted to texting while driving discounted future monetary rewards at a greater rate in experimental settings.12,13
  • Distance is a critical determinant. When analyzing students in a hypothetical driving scenario, the researchers were able to show that the farther away the destination was, the lower the likelihood that participants would wait to reply to a text message.13
  • It matters whom they’re texting. In an experiment with choice sets, the most vital determinant of replying to a text was who sent the message. Participants were most likely to reply if the message was from a significant other. Other research confirms this view, finding that social distance determines texting-while-driving behaviors, in that we are more likely to reply to those to whom we are socially close.14,15
  • Road type influences our likelihood of texting. Drivers are less likely to read a text in “stop-and-go” city traffic, where it’s riskier to take your eyes off the road, compared to rural/highway driving. Perceived crash risk worked the same way: the higher the perceived probability of crashing, the lower the chances of replying to a text.15,16

Reading over these findings, you may be thinking: “Yeah, obviously.” The important takeaway here is not that this line of research has turned up surprising results; rather, the point is that, across several studies, researchers have been able to show that texting-while-driving behaviors fit hyperbolic economic models, helping us predict who is prone to distracted driving and when. This means that behavioral interventions are a promising tool to prevent distracted driving, and potentially save lives.13,17,18

A note on legislation

Given how impulsivity and present bias encourage distracted driving, legislation implementing fines and texting bans is a critical piece of the solution. We also know that driving laws are, for the most part, successful in changing driving behaviors: systematic reviews of seatbelt use and drunk driving show that legislation prohibiting these behaviors reduces related fatalities.19,20

The same is true when it comes to distracted driving: students in states with bans on phone use while driving reported fewer texting-while-driving behaviors than those in states without.21 However, changing distracted driving behavior may require more nuance than previous campaigns against other dangerous driving behaviors.

Recent meta-analyses show that while distracted driving laws may, in some cases, change behavior, the evidence is mixed, and the true effectiveness of legislation on texting behavior is unclear. As put by Dr. Kit Delgado, physician and professor at the University of Pennsylvania, “You talk to any teenager in the country, and they’ve been beaten over the head that texting while driving is dangerous … but the decision to reach for that phone can be impulsive, it can be emotional, it can be subconscious and automatic.”22

The choice to text and drive is often a split-second decision, and is typically a matter of perceived urgency. As put by Atchley et al.: “While money loses value on the time span of weeks, information loses value within minutes, which may explain why behaviors like texting often occur in inappropriate situations and may seem like addictions.”23

From the BE toolkit: Effective interventions for distracted driving

Luckily, behavioral economics offers practical interventions that help overcome present bias and hyperbolic discounting, interventions that go beyond legislation and may prove useful against distracted driving. These include commitment devices, episodic future thinking, threat appeals, and feedback.

Commitment devices

A commitment device is a way to “lock oneself” into a plan of action, especially for behaviors that we know are good but don’t necessarily want to enact when the time arises. The classic example comes from Greek mythology, when Odysseus instructed his crew to fill their ears with beeswax and tie him to the ship to avoid being lured by the sirens. Examples in day-to-day life include scheduling workouts in advance with an exercise partner or setting a deadline to achieve a goal (perhaps with the added motivation of a monetary fine for missing said deadline).

These types of devices help people quit smoking, lose weight, and achieve other health goals. One study found that commitment devices increase the success rate of quitting smoking by 40%. We even see commitment devices in software, which offers computer users the option to schedule updates for a future time instead of installing them right away.24,25

Commitment devices already exist in driving. For instance, some Apple products can be set to automatically enable “Do Not Disturb” mode when the user is driving. Yet many are hesitant to opt into this feature out of fear of missing calls from important people.26

As in-vehicle technology advances, inspiration for more possible interventions may come from seatbelt nudges. Studies on seatbelt reminders and seatbelt interlocks (cars that won’t allow certain features until a seatbelt is secured) show impressive results for improving seatbelt behaviors.28

Episodic future thinking

Without making specific commitments, one can improve driving behavior simply by thinking about the future. Episodic future thinking (EFT) is an effective method to reduce hyperbolic discounting, the very bias that largely explains distracted driving behaviors.

In one study on smoking behavior, researchers asked participants to vividly picture future events which they were looking forward to. (The only caveat: the events couldn’t be related to smoking.) Results showed that this exercise decreased delay-discounting behavior—in other words, led people to place a higher value on future rewards—and also reduced the intensity of their cravings for cigarettes.29

Threat appeals

Threat appeals may also be a useful method to improve driving behavior. A threat appeal is a “message that tries to raise the threat of danger and harm and discourage risky behavior.” Importantly, these threat appeals don’t need to induce fear to be effective, but instead encourage “anticipated regret” (in this case, nudging people to think about injuring or even killing someone else due to distracted driving).

One study compared participants who watched a 60-second video of a texting-while-driving car crash to those who watched a regular commercial. The “control” group (those who did not watch the car crash video) was 50 percent more likely to make an impulsive decision to text and drive.30,31

Feedback

When we use our cellphones and drive, not only does our driving performance diminish; so does our awareness of that decline.32 Receiving feedback on our driving may help ameliorate this problem. A study on telematic notifications found that notifications describing drivers’ performance, and comparing it to their personal best, improved their driving. These nudges are even more effective when combined with social comparisons (e.g., a leaderboard).33,34

Moving forward: applying BE principles to the distracted-driving problem

Needless to say, technology has infiltrated every aspect of our lives—for better or worse. Distracted driving is a prevalent risk that deserves serious time and attention toward mitigation.

Research on distracted driving shows how this behavior is a complex issue that derives from impulsivity and present bias, requiring several approaches and interventions to solve. Legislators, car manufacturers, and developers alike should consider tools from behavioral economics in shaping laws and technologies to create the best solutions to tackle the issue of distracted driving.

In the meantime, here are a few takeaways we can use to improve our own driving behaviors:

  • Let go of the “now”: Remember that the decision to text and drive resembles the decisions behind other impulsive behaviors. Present bias shows that we weigh these choices as more important in the moment, and we’re more likely to give in when driving long distances or when the messages come from people who are important to us.
  • Commit to driving safer: Commitment devices are an excellent tool to improve driving behaviors. Both planning ahead and asking others to hold us accountable will set us up for success and make us less likely to reach for our phones behind the wheel.
  • Think about the future: Make a practice of thinking about yourself in the future, and what different scenarios might occur if you decide to send a text while driving.

Speaking the Truth: Accents, Credibility, and Implicit Bias

Life as an immigrant in the United States has not been particularly hard for me, as I have always lived in diverse cities like Los Angeles and Philadelphia. However, I certainly have been and continue to be a victim of discrimination. Now that I am graduating this year, I find myself worrying about whether recruiters will be less likely to invite me for an interview because of my name. “Shi Shi Li” sounds very foreign, and is nowhere near your typical American-sounding name.

My concern is not unfounded. A résumé audit study conducted by Kang and colleagues found that Asian and Black applicants received more callbacks if they fully “whitened” their résumés, compared to those who did not.1 In this study, whitening a résumé meant that the applicant changed their first name to sound more American and reframed their experience to be more race-neutral. For example, if an Asian applicant’s name is Lei Zhang, changing it to Luke Zhang would increase his chances of getting a callback.

Likewise, under the experience section, changing the club name “Aspiring Asian-American Business Leaders” to “Aspiring Business Leaders” would increase an Asian applicant’s chances of getting a callback. 

But simply scrubbing your CV of all references to race isn’t necessarily enough. My friend, who is also graduating this year, shares my concerns about getting a job, even though she has an American nickname which she puts on her résumé. She worries more about her accent, which she cannot so easily get rid of.

My friend is worried that her accent will leave a less-than-favorable impression on the recruiter during an interview. She is worried that the recruiter will have difficulty understanding her. She is worried that the recruiter will see her as an outsider and will not be able to mesh well with the employees at their company. Fundamentally, she is worried that she will be discriminated against because of her accent. 

Unconscious bias and accents

The threat of overt discrimination is discomfiting enough on its own—but another dimension that many people overlook is how accents can implicitly bias people, and how they affect a person’s perceived credibility. An experiment conducted by Lev-Ari and Keysar found that when non-native speech is difficult to understand, speakers are perceived as less credible.2 To arrive at this conclusion, the researchers had 30 native speakers of American English listen to 45 prerecorded statements about trivial things (e.g., “A giraffe can go without water longer than a camel can”) made by 15 native speakers, 15 speakers with a mild accent, and 15 speakers with a heavy accent.

Before the participants listened to the trivia statements (half true, half false), the researchers had them record five trivia statements themselves, supposedly for future participants. This was done in order to drive home the fact that speakers were merely reciting trivia statements provided by the researchers, not expressing their own knowledge. After listening to each trivia statement, participants were asked to rate its veracity on a 14-cm line, with one pole labeled definitely false and the other labeled definitely true.

Results showed that accented speech was rated as less credible than native speech. Interestingly, participants did not rate the veracity of mildly accented statements significantly differently from heavily accented statements.

These findings are concerning for non-native speakers applying for jobs, as well as non-native speakers who are already employed. If their recruiters, peers, supervisors, and/or clients do not find them credible, then it would be extremely difficult for them to reach their goals. 

Combatting discrimination

What can companies do to prevent their recruiters and employees from incorrectly evaluating the veracity of an applicant’s or employee’s statement because of their accent? According to the study’s second experiment, companies can simply warn their recruiters and employees that their credibility judgments may be influenced by the difficulty of processing accented speech.

The procedure for the second experiment was identical to the first, except for two details. First, the researchers told the participants that “The experiment is about the effect of the difficulty of understanding speakers’ speech on the likelihood that their statements will be believed.” Second, the participants also rated the difficulty of understanding each speaker on a continuous scale ranging from Very easy to Very difficult.

The results showed that the participants rated statements spoken by a mildly accented speaker as equally credible as statements spoken by a native speaker. However, the warning did not prevent the participants from judging statements spoken by heavily accented speakers as less credible than statements spoken by native and mildly accented speakers. In addition, the results also revealed that the harder it was to understand the speaker, the less credible participants rated the statements.

The U.S. Equal Employment Opportunity Commission states that “Generally, an employer may only base an employment decision on accent if effective oral communication in English is required to perform job duties and the individual’s foreign accent materially interferes with his or her ability to communicate orally in English.”3 Even if recruiters are aware of this law, they may subconsciously discriminate against an applicant based on her accent.

These results suggest that companies should hold periodic workshops—for example, once a year—informing their employees that they may subconsciously discriminate against applicants, and even their fellow coworkers, because of their accents. While these workshops may not prevent discrimination against applicants and employees with heavy accents, any step taken toward reducing discrimination in the labor market and workplace is a move in the right direction.

Lastly, if you believe you have been denied an opportunity because of your accent or any other form of discrimination (and you’re in America), you can file a charge of employment discrimination on the U.S. Equal Employment Opportunity Commission public portal. Likewise, if you believe someone you know has been denied an opportunity because of discrimination, please direct them to that website, or pass along any resources that may be helpful to them.