Behavioral Science and the Future of Agriculture

While some sectors are shrinking, one industry that will continue to grow is agriculture. We all need to eat, and the demands on agriculture are increasing at a steady pace. The global population is predicted to reach 9.8 billion by 2050, and experts expect overall food demand to rise by more than 50 percent and animal-based food demand by more than 70 percent.1,2

Canada, as one of the world’s largest agricultural exporters, is particularly affected by this demand. The Canadian government has set its sights on growing the country’s agri-food exports to at least $75 billion annually by 2025.3 As the demand for food increases, so does the need for sustainable practices: agriculture and related land-use change generate one quarter of annual greenhouse gas (GHG) emissions.2

Solutions are being developed to address these problems. The agriculture technology industry, known as “agtech,” is becoming more populated and diverse. Experts believe agtech will become a $730 billion (USD) industry worldwide by 2023,17 and Canadian startups are contributing their share. Vancouver-based Terramera believes its pest control technology could reduce synthetic pesticide use by 80% and increase crop yields by 20% globally.4 Decisive Farming, an agtech startup headquartered in Irricana, Alberta, offers a platform that streamlines farming processes and optimizes production. And Calgary’s Verge Ag uses land data and artificial intelligence to create specialized GPS paths that machinery can follow to work on behalf of farmers.5 These technologies, and others like them, have the potential to revolutionize the agriculture industry.

Despite the promise of agtech, the adoption of many of these new tools has been slow. Behavioral science might offer a solution to this problem, shining a light on the reasons why some technologies haven’t caught on and providing interventions to fix that.

Before discussing the role of behavioral science in agriculture, though, let’s take a look at two significant categories of agtech: precision agriculture, and automation and artificial intelligence.

Precision agriculture

Precision agriculture uses remote sensors to analyze the needs of individual crops. Because each crop and field is different, the technology provides precise information that helps maximize yield per crop. With it, farmers can be selective in their resource use, deploying inputs like water and fertilizer only in the locations where they’re needed.3 Precision technology also improves dairy and livestock farming, with sensor systems that measure animal behavior and health, supporting farmers’ decision-making.6

Overall, precision agriculture is an incredibly helpful tool for decision-making optimization. This technology has several key benefits: reducing resource costs for farmers, reducing the risk of over-fertilization, and improving sustainability.6,7

Artificial intelligence, automation, and agriculture

Technological advances allow for autonomous vehicles and equipment to be employed in fields 24/7, increasing productivity, reducing food waste, and protecting farmers’ safety. One example of AI technology in the agriculture industry is facial recognition software for cows, which provides farmers with the ability to track individual animals’ health in detailed ways. Other uses of AI include using data to decide where to apply which type of herbicide, and predicting upcoming weather patterns to help farmers in their decision-making.18

However, the uptake of these technologies is low.6 Researchers found that in Germany, only 10–30% of farmers use new tools such as these.19

Why aren’t farmers using agtech?

A commonly cited obstacle is the cost of new technologies. For some farmers, the upfront costs are too high given the risk of crop failure due to poor weather. The year-to-year variability of crop prices can also make farmers more cautious with capital investment.1,9

Additionally, many emerging technologies involve the use and integration of data across different products. Evan Fraser, a professor at the University of Guelph who researches farmer behavior, states that data interoperability, data governance, and cybersecurity are the most significant challenges for farmers adopting the technology. Ransomware attacks are a threat in any context, but they could be devastating for farmers; Dr. Fraser offers the potential scenario of hackers remotely taking control of a poultry farm’s ventilation system, which would have drastic consequences for the farm.10

Both cost- and data-related factors contribute to the slow adoption of agtech, and understandably so. These challenges considerably impact adoption and will need to be addressed through technological improvements that decrease costs while improving security.

But behavioral science can also help speed up agtech’s adoption, and policymakers should start to consider its use. Fortunately, research in this sector is growing, and it points to effective behavioral interventions that improve technology adoption.

Understanding individual factors is key to improving agtech adoption

Factors like age, gender, experience, attitudes, and beliefs explain a lot about a farmer’s inclination to use new technology. In assessing precision agriculture technology adoption, researchers from HEC Montreal and the University of Buckingham found that factors including ease of use, knowledge of technology, and perceived usefulness all significantly explained the variance in farmers’ adoption of the technology.11 These results suggest that communications about threats, and the responses recommended to address them, should account for these individual qualities.12

Cognitive biases also get in the way of technology adoption in farming, including heuristics and cognitive dissonance.

One research group found that heuristics play a significant part in farming behavior. Heuristics, or rules of thumb, are cognitive shortcuts used to process information efficiently. For example, in Kenya, farmers use visual cues to identify high-quality dairy cows. By relying on these rules, farmers can make accurate decisions most of the time.

Yet, in other scenarios, these shortcuts prevent optimal farming decisions. For instance, farmers in Mozambique traditionally aim to plant their cotton by mid-December each year. However, the time of year does not actually dictate the optimal time to plant cotton. A better indicator is the saturation of the soil after rainfall. As Mozambique, like other parts of the world, is seeing increasingly unpredictable patterns of rainfall, the farmers’ “rule” was at risk of inaccuracy. The researchers helped farmers switch to a rain-based rule of planting cotton after the first large rainfall, consequently increasing their crop yield.13

Cognitive dissonance also contributes to a reluctance to accept new information or techniques. Cognitive dissonance is how our minds respond when presented with information that conflicts with our existing beliefs. While cognitive dissonance can result in positive behavior change, our minds often rationalize continuing with behavior that aligns with our existing beliefs, even when it is irrational.

Take, for example, a Queensland peach grower who believes they have accurate knowledge of a well-known pest, the fruit fly. The farmer believes that the pest is not a credible threat, and that they understand how it arrived and how it moves around the farm. When the farmer’s neighbor begins installing fruit fly traps to reduce the risk of damage to their crops, the farmer faces a choice: ignore the neighbor and proceed with their existing beliefs, or adjust their beliefs and install traps.

Psychologists explain that individuals will actively avoid situations that conflict with their beliefs because of the cognitive strain involved in changing previously held views. It is much easier to do what you’ve always done and view the world how you’ve always seen it.14

Educational nudges and farmer behavior

Because personal beliefs and perceptions have such a great impact on technology adoption in agriculture, experts have recommended simply educating farmers about various risks, and how technology can be used to minimize them.

In central Tanzania, Seetha et al. (2017) used focus groups and farmer learning sessions to encourage farmers to take the problem of aflatoxin contamination seriously. Two years after the sessions, farmers’ understanding of aflatoxin’s negative effects had increased from 19% to 82%, and the frequency of contamination had decreased from 44% to 5.9%. Similarly, in Wisconsin, farmers who participated in an educational workshop on nutrient management plans changed their behavior significantly, using more nutrients and conducting soil testing more frequently.12

Evidently, education for farmers can significantly change behavior, but more important is how this education is delivered. Experts recommend considering personal factors, like age, in developing educational materials, focusing on demonstrating the technology’s value and ease of use, and ensuring farmers’ concerns are heard in educational sessions.

Farmer behavior, agtech, and social networks

Many experts have recently called for a more holistic approach in future interventions that accounts for farmers’ social spheres. Social networks are incredibly important in understanding and changing behavior. One study found that one of the most significant influences on a farmer’s use of a tool was their community: a farmer was more likely to try an innovation if a friend recommended it. The same held true even among agricultural experts, with one study finding that agronomists adopted a particular technological tool based on their colleagues’ recommendations.16

A different research project invited farmers to an event where other local farmers presented humorous plays on farming safety topics. A week later, the researchers found that 67% of attendees were considering making safety changes, and 42% had actually made them.12 This study shows the impact that social norms and networks have on changing behavior. When trying to inspire beneficial behavior change, understanding an individual’s network and how they receive information is incredibly impactful and should be considered in all interventions moving forward.

Behavioral science and the future of agricultural technology

While the current behavioral literature has produced encouraging results on low-cost behavioral interventions, Rose et al. stated that several gaps exist in the research today. Going forward, we need:

  • More robust, long-term studies of farming behavioral change: There is still a lack of high-quality research specific to the agriculture industry, including the use of control populations in studies.
  • More work to understand personal traits and impacts on behavior: Rose and colleagues recommend better understanding how cognitive and emotional factors affect farming behavior.
  • More knowledge of how targeted policy tools work: While knowledge about individual behavior change is increasing, little is known about which policy tools are more likely to get results in different contexts.12

Overall, the world’s changing agricultural needs require new practices and new technologies to keep up. While these technologies are being developed at pace, several factors make farmers hesitant to adopt them. Behavioral insights and nudges can help us understand the factors, beyond cost and data security, that influence the adoption of agtech products. Research shows that personal factors, cognitive heuristics, education, and social norms shape how farmers perceive new technologies, and these findings should inform low-cost interventions to encourage further adoption. While more research on policy and the use of nudges is still needed, the research that exists today points to a promising future for improving farming practices around the world.

Cognitive Biases Stop Us From Donating Effectively

In 2018, Americans donated $410 billion to charity,1 roughly equal to Norway’s nominal GDP that same year.2 If American charitable giving were a company, it would be one of the top three in the world in terms of revenue.3 Yet despite the overwhelming altruism of many individuals globally, over a million people die each year from preventable disease.4

Many more struggle with health issues that can be solved at relatively low costs. Take, for example, trachoma, an infectious eye disease that over 20 million people actively suffer from, many of whom live in conditions of extreme poverty.5 If left untreated, trachoma can cause an individual’s eyelids to turn inward, eventually resulting in blindness. The cost of treating trachoma with surgery has been estimated to be as low as $7.14,6 yet many still do not have access to this potentially life-changing procedure. These facts and figures provide a foundation for some key questions: Where is all of this money going, how much is it helping, and how can better outcomes be achieved? 

One reason why the donations described above do not necessarily achieve meaningful progress on these important outcomes, such as alleviating the suffering caused by trachoma, is that cognitive biases drive many of our donation decisions. These rationality-inhibiting biases can lead to a misallocation of resources, hereafter referred to as ‘ineffective altruism’. For the purpose of this article, ineffective altruism occurs when an individual has a given preference for a cause — such as curing blindness, protecting animal welfare, or improving access to education — and does not direct their donations to a charity that is highly effective at solving it.

Tremendous social good can be created if the biases that influence ineffective altruism are identified and corrected through better choice architecture where appropriate. In particular, two cognitive biases are important drivers of ineffective altruism: distance bias and the identifiable victim effect.

Distance bias

Distance bias describes the cognitive bias that causes individuals to place greater importance on things that are closer to them, both physically and temporally, even when things further away might be equally or more important.7 It is why an earthquake in California might elicit a more emotional response from Canadians than an equally disastrous earthquake in Chile. Peter Singer alluded to distance bias decades ago in his famous essay “Famine, Affluence, and Morality.”8 Singer poignantly argued that “it makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.”

If donation recipients are chosen based on proximity instead of necessity, resources may not reach those who need them the most. Global charitable giving is predominantly fuelled by rich countries,9 which is why a problematic misallocation of resources can occur when distance bias influences where donations go. If donors in rich countries only donate funds to those who live in their local communities or cities, then deserving recipients in developing countries who stand to benefit the most will not receive the help they need. The tax-deductible nature of donations magnifies this further: ultra-wealthy donors, who face high marginal income tax rates, have a comparatively strong incentive to donate because their marginal cost of giving is lower than that of low-income individuals.10
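
To make that arithmetic concrete, here is a minimal sketch; the tax rates are hypothetical, chosen purely for illustration rather than drawn from the cited source:

    def out_of_pocket_cost(donation: float, marginal_tax_rate: float) -> float:
        """Cost to the donor of a fully tax-deductible donation (toy model)."""
        return donation * (1 - marginal_tax_rate)

    # A donor in a hypothetical 50% bracket forgoes only $50 to give $100...
    print(out_of_pocket_cost(100, 0.50))  # 50.0
    # ...while a donor in a 15% bracket forgoes $85 for the same $100 gift.
    print(out_of_pocket_cost(100, 0.15))  # 85.0

The lower the marginal cost of giving, the further a wealthy donor’s dollar stretches, which is why distance bias among wealthy donors can have an outsized effect on where global resources flow.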

There is empirical evidence that donors generally prefer recipients who are nearby: according to survey data, only 31% of donors worldwide choose to donate to charitable organizations located outside their country of residence.11 Furthermore, experimental evidence shows that as the psychological distance between prospective donors and recipients increases, donors are less willing to provide help, but only when the recipient’s identity remains unknown.12

Humans versus Econs

These biases occur because altruists are Humans, not Econs. As described by Cass Sunstein and Richard Thaler in their book Nudge, an Econ is a theoretical type of rational economic decision-maker who makes unbiased forecasts and optimizes given their choices.13 By contrast, Humans suffer from innate cognitive biases that limit rationality,14 especially when faced with ambiguous or probabilistic choices.15 These biases sometimes cause us to overeat, fail to save, and indulge in vices such as cigarettes or alcohol — activities that an Econ would tend to avoid given the costs involved. While an altruistic Econ would calculate which individuals can be helped the most with each dollar and donate accordingly, a Human might instead choose to donate to charities that are more emotionally striking or familiar to them, even if those charities are not particularly effective or evidence-based.

The identifiable victim effect

The second cognitive bias that affects charitable decisions is the identifiable victim effect. This bias describes an individual’s inclination to be more charitable toward a specific, identifiable victim than toward a larger, more ambiguous group with an equal or greater need for help.16 It is why a television advertisement that shows John, a 5-year-old boy with a rare disease, might be significantly more effective at soliciting donations than a similar advertisement that mentions the millions of children who die annually due to insufficient access to clean drinking water.17 The identifiable victim effect is candidly exemplified by a quote that is commonly attributed to Joseph Stalin: “A single death is a tragedy. A million deaths is a statistic.”

Empirical evidence confirms the influence of the identifiable victim effect on donation behavior, as seen in Small, Loewenstein, and Slovic’s 2007 study.18 The study’s participants were each given $5 for completing a short survey, along with one of three charity request letters. Each letter contained either the story of Rokia, a starving girl from Mali; a statistic on how many children are dying from starvation in Mali; or the description of Rokia alongside the statistical information. Of the three conditions, the letter that featured Rokia’s story alone received the highest average donation.

Most charities are aware of the identifiable victim effect and often feature prominent stories of needy individuals in advertisements. Problems may arise when charities do not receive donations based on their potential to improve lives — the principal concern for Econs when they decide where to donate — but rather based on how good they are at leveraging emotions and crafting stories that Humans find captivating. Since charities are often incentivized to raise donations by winning over donors’ hearts, effective charities that do not leverage the identifiable victim effect may lose donations to ineffective ones that do.

Biases and heuristics are hardwired into our brains,19 which is why making individuals aware of their cognitive biases can be an ineffective strategy for improving decision making.20 However, charitable organizations can play a prominent role in correcting these biases through the use of properly designed choice architecture, a term that describes the context in which choices are presented to decision-makers.

Solutions for the identifiable victim effect

For the identifiable victim effect, I argue that effective charities should not necessarily try to correct it, but instead leverage it. Researchers have found evidence that an essential driver of the identifiable victim effect is that a higher proportion of those identified can be saved compared to statistical groups.16 The researchers noted that “When victims are identified it is clear exactly how many people will die, but when victims are statistical it is always possible that more or fewer will die […] Subjects felt avoiding certain fatalities was more important than avoiding uncertain fatalities.”

This ambiguity can potentially be ameliorated by communicating expected values, thereby showing donors that they can indeed make a tangible difference. If, for example, a donation has a 50% chance of saving ten lives, it may be better to communicate that it would save five lives instead of framing it as having the potential — but not the certainty — of saving ten. An even more effective solution would be to harness loss aversion, such that the donation is framed as preventing five deaths versus saving five lives.21 Ultimately, these are empirical questions that could benefit from further research that uses randomized controlled trials to test which communications are the most effective under specific circumstances.
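
As a rough sketch of this framing logic (the 50%-chance-of-ten-lives numbers are the hypothetical above, not empirical estimates):

    # Expected value: probability of success times lives saved if successful.
    def expected_lives_saved(probability: float, lives_if_successful: int) -> float:
        return probability * lives_if_successful

    ev = expected_lives_saved(0.5, 10)  # 5.0

    print(f"Your donation is expected to save {ev:.0f} lives.")      # expected-value frame
    print("Your donation could save up to 10 lives.")                # ambiguous frame
    print(f"Your donation is expected to prevent {ev:.0f} deaths.")  # loss-aversion frame

Which of these messages actually performs best is exactly the kind of question the randomized controlled trials mentioned above could answer.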

Solutions for distance bias

A possible way of correcting distance bias could be by reducing the psychological distance between donors and recipients when soliciting donations. Researchers found that framing climate change communications to reduce psychological distance increased mitigation intentions significantly.22 Showing videos of the hypothetical impacts of climate change on the local surroundings of the participants in a treatment group increased willingness to take action to a greater degree than the control group, who instead watched the impacts of climate change on other countries. Similarities between climate change and altruistic decisions — namely, uncertainty regarding the impacts of individual actions, the need to sacrifice in the present, and geographical distance between actors — could serve as a useful analog for identifying strategies to improve the effectiveness of altruistic decisions. 

Based on the aforementioned literature, effective charities could potentially combat distance bias by communicating the evidence-based impacts of their work through referential terms that donors can easily interpret. In New York, saying “Every 10 days, enough people die from malaria to fill Madison Square Garden” might be more salient to donors than “Every 45 seconds, a child dies somewhere in the world from malaria”, thereby reducing distance bias.23 Another possibility could be to attach an identity to the statistics: “Every 45 seconds, a child similar to Rokia dies in Mali.” Improved decision making can also occur from presenting impacts in terms that donors actually care about: the number of lives saved, the number of people cured of an ailment, or the increase in recipients’ income from an intervention, for example.

Conclusion

Distance bias and the identifiable victim effect can interfere with our ability to make rational altruistic decisions, necessitating effective remediation from charities in the form of well-designed choice architecture. Effective charities should aim to create better solutions that take into account common biases, ultimately resulting in a better alignment of an individual’s desire to help a cause with the behavior necessary to do so. To inform these solutions (and to ensure greater external validity), empirical research that goes beyond experimental environments is needed. Charitable decisions — and the recipients from around the world who stand to gain the most from them — deserve an evidence-based approach that can translate to real-world impact. Behavioral science insights can create transformative social good by better understanding what fuels these decisions for individuals, organizations, and governments alike.

Routine in Quarantine: The Upside of a Pandemic

If you’ve ever taken an Econ 101 class, you’re familiar with the concept of opportunity cost. It’s the idea that, when making decisions, you should always consider what else you could be doing. Essentially, opportunity cost is about the value of time, captured by the phrase time is money.

Of course—as it originates in the world of economics—opportunity cost doesn’t touch on concepts like memory or perception. But what if our goal was to maximize the length of our psychological lives rather than the size of our bank account? In that case, when choosing between experiences, we would have to consider the memories that each experience would beget. A slogan for this kind of experiential opportunity cost could be time is memory.

In a previous article, I discussed some of the psychological quirks that warp our perception of time. One of the takeaways was that novel experiences expand our perception of time, whereas routine ones contract it. We can choose to break out of our routine every now and again, and take that vacation to Venice. By doing so, we accrue new memories, and our psychological history lengthens. From this perspective, a life spent in the doldrums of routine seems downright wasteful. Still, since most of us value routines, habits, rituals, and skill-building, something seems to be missing from this picture. We can’t shake off the niggling suspicion that psychological time maximization—in the form of memory-collecting—shouldn’t be our single, overarching goal.

Memories at the cost of experiences

One way to assess the value of memory-collecting is to test its impact on our two selves: the experiencing-self (which evaluates our moment-to-moment experiences) and the remembering-self (which evaluates our memories of those experiences).1 At first glance, novel experiences seem to perform pretty well—a moonlit stroll through Venice’s cobbled alleys is pleasurable in the moment as well as in retrospect. A closer examination, however, reveals that our novel experiences often favor the remembering-self at the cost of the experiencing-self.

Daniel Kahneman’s famous thought experiment cleverly illustrates this trade-off.2 Imagine you are planning a week-long vacation and have to pick between two options: either you sacrifice rest and comfort and go backpacking across Europe, or you catch up on sleep and sanity with a relaxing week of margaritas by Venice Beach. Here is the catch: afterwards you will take a pill that will wipe every second of the vacation from your memory. No mental or physical photographs would remain, only the experiences themselves at the time you experienced them. So, which vacation would you take?

While your answers may vary, the amnesia pill almost certainly plays a pivotal role in your decision process. This is because the vacation that offers the most robust and durable memories is not the same vacation that provides the most relaxing and rejuvenating experiences. In other words, the satisfaction of one self is purchased with the dissatisfaction of the other. And when it comes to vacation-planning in particular and decision-making in general, we usually err on the side of pleasing the remembering-self. We don’t put much weight on the sleep deprivation, achy feet, or travel anxiety the experiencing-self will endure. Rather, we imagine our future cosmopolitan selves, who will forever bask in the afterglow of our globetrotting escapades.

This built-in trade-off, between pleasing ourselves in the moment and satisfying ourselves upon recollection, forces us to choose among three options.

Option 1: We can claim that one of the selves is the “real” self, and only that self should matter. Kahneman initially theorized that the experiencing-self was our true self, and chalked up people’s emphasis on the remembering-self to good old human error. Of course, one could just as easily make the case for the supremacy of the remembering-self, in which case memory-collecting would be a valid goal, regardless of the initial experience.

Option 2: We can accept the futility of this line of analysis and abandon the project altogether. This is the option that Kahneman ultimately settled on, after he concluded that people’s incorrigible fascination with the remembering-self rendered option one untenable.

Option 3: We can attempt to balance the satisfaction of the two selves, in which case the objective of memory-collecting must be tempered with other competing, and sometimes incompatible, objectives. As you may have guessed, I’m a fan of this last option. Unfortunately, this brings us back to square one, because routine—the ultimate psychological time-shrinker—is chief among these competing objectives.

Why practice if experience doesn’t change?

So how can it be worthwhile to spend large chunks of our lives in the grip of unmemorable routines? Neither the remembering-self nor the experiencing-self seems to be all that fond of dull repetition. In fact, one of the conditions of deliberate practice—the kind of practice needed to achieve expertise—is that it is uncomfortable.3 To top it off, it isn’t clear that the improvement that comes with routine practice even has a positive impact on our experience. Consider the following scenario:

Quarantine is in full steam and you can feel yourself morphing into a couch potato. Finally, you put your foot down. Enough is enough. You resolve to begin a follow-along yoga routine at home. But soon you discover that your motivation to stick to your self-imposed workout schedule pales in comparison with your desire to break the world record for consecutive hours binging Netflix.

A friend floats a great idea—why don’t you play a sport? Tennis was your go-to sport as a kid, so you decide to give it a try. Luckily your friend lives nearby, your work schedules are similar, and the weather is at its best. At first, the tennis is a blast. Admittedly, neither of you is remotely consistent or in the vicinity of graceful, but it feels great to get outdoors, soak in the sun, and hang out. Plus, the challenge of simply getting the ball over the net is so demanding that you are both fully locked into your rallies. Exercise has never been so fun and easy.

As time goes on, you and your friend begin to take private lessons. Eventually your high pops and accidental slices evolve into topspin lobs and perfectly placed drop shots. Your matches with your friend are just as competitive as before, but the overall quality of your tennis has improved drastically. Still, after one of your tightest matches—one in which you double-faulted on match point in a third-set tiebreak—you can’t help but be seized by a bout of frustration. All of that practice, and for what!

Later that night an odd realization hits you—even though your tennis game has evolved and your matches have acquired a new intensity, the enjoyment you feel while playing has remained relatively stagnant. If anything, it may have taken a nosedive. After all, the same accidental slices that you used to laugh off now set off a stream of four-letter words.

Going with the flow

Mihaly Csikszentmihalyi charts this phenomenon in one of the most useful graphs I have ever encountered (see below).4 If you take a look at the graph, you’ll see that the y-axis represents the challenge of an activity and the x-axis represents the skill level of the performer. The flow channel is the sweet spot where the skill level just meets the challenge level. As Csikszentmihalyi puts it: “enjoyment appears at the boundary between boredom and anxiety, when the challenges are just balanced with the person’s capacity to act.”

Source: Bailey, C. (2013, September 12). How to ‘Flow’: Here’s the most magical chart you’ll come across today. A Life of Productivity. https://alifeofproductivity.com/how-to-experience-flow-magical-chart/
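
One way to read the chart is as a simple decision rule comparing challenge to skill. The sketch below is my own toy rendering of it; the numeric cutoff is invented, since the model itself specifies no exact thresholds:

    def flow_state(challenge: float, skill: float, margin: float = 1.0) -> str:
        """Toy classifier for the flow model: flow occurs when challenge
        roughly matches skill; the margin is an invented cutoff."""
        if challenge > skill + margin:
            return "anxiety"  # the task outstrips your ability
        if challenge < skill - margin:
            return "boredom"  # your ability outstrips the task
        return "flow"         # challenge just balances capacity

    print(flow_state(challenge=2, skill=2))  # flow (novice vs. novice)
    print(flow_state(challenge=8, skill=8))  # flow (expert vs. expert)
    print(flow_state(challenge=8, skill=2))  # anxiety
    print(flow_state(challenge=2, skill=8))  # boredom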

When you and your friend were tennis novices, you were in position 1, parked comfortably at the foot of the flow channel. The two of you were at similar skill levels, so the challenge was always just enough for you to handle, keeping you fully engaged. Because the two of you trained in tandem, you skirted the banks of anxiety and boredom and sailed straight up the flow channel together, making it to position 4 in just a few months. The counterintuitive insight here is that, even though your respective skills have drastically improved, position 4 delivers similar levels of enjoyment to position 1. This raises the question: why practice at all? Why seek out position 4 when you are just as happy as you were at position 1?

While there are many possible answers to this question, the most glaring one shouldn’t be overlooked: we really like being good at stuff. As Harry Harlow showed with rhesus macaques and Edward Deci with graduate students, we enjoy tasks and activities that offer no external reward, because the intrinsic rewards of improvement and achievement are enough in and of themselves.5,6 Humans crave competence, especially when it comes to the tasks and activities that their identities and self-narratives are built around.7 Often, we aren’t satisfied with just hitting the ball around; we want to be excellent tennis players. And the only way to become excellent at anything is to build up to it incrementally. As the adage goes, Rome wasn’t built in a day. We must grit our teeth and endure hours and hours of frequently unmemorable and frustrating, yet ultimately rewarding practice. Real mastery is the progeny of routine.

The silver lining of a pandemic

While quarantine puts a damper on most potential activities, routines escape largely unscathed. In the previous article, I emphasized the importance of novel experiences and changes of scenery to counteract the warping of time that goes hand-in-hand with quarantine. I stand by this sage advice.

But there is still the question of what to do with your remaining downtime, all that time that used to be spent hanging out with friends, but is currently being squandered on Friends reruns. The truth is that there has never been a better time to practice those scales, start that workout, or perfect that swing. And although the day-to-day details of your routines may not last in your memories, they will change the character of your future experiences, and will therefore endure in the memories of those experiences. In the long run, you’ll remember those heated matches with your friend fondly. When your kids get their first rackets, you’ll be able to teach them how to hit with topspin yourself. Wimbledon matches will become thrilling to watch, as you finally appreciate the beauty of Federer’s stroke. You’ll be in considerably better shape without having suffered through a minute of an interminable workout. And for all these benefits, you’ll have routine to thank.

How Dogmatism Leaves Us Less Informed

Remember that rumor on your newsfeed about masks making a COVID infection more likely? The one you weren’t quite sure about. It sounded remotely reasonable—but should you share it? Or should you go and check a reliable second source first?

Such choices are ubiquitous: Do we trust an intuitive judgment or do we seek more information?

In a new paper published in PNAS, my colleagues Max Rollwage, Ray Dolan, Steve Fleming and I explored how we deal with such information-seeking decisions. We were particularly interested in how these choices differ across people who are more or less dogmatic. Dogmatic people believe that their worldview reflects an absolute truth, which often stifles debates and drives us apart.

However, it’s unclear what sort of cognitive processes drive this outlook on life. We believe that understanding how dogmatic people search for information would be a good starting point.

People who think dogmatically often appear uninterested in novel information that could change their mind. One reason for this is what’s known as motivated search. In other words, more dogmatic people might be particularly enamored with their opinions: Why hear what the other candidate has to say when my own view is better anyway?

We all share this bias, and it may be inflated in more dogmatic people. However, there’s a catch: motivated search is tied to our specific group membership or opinion. If you’re a Republican, your bias is likely red; if you’re a Democrat, your bias is likely blue. This made us wonder: Is it the dogmatic individual’s specific opinions that make them seek less information? Or is their lowered search driven by something that transcends particular views?

Odds you double-check: Dogmatism and information-seeking

To answer these questions, we asked over 700 US adults to play a simple computer game that was completely unrelated to their personal values: They saw two black boxes and had to decide which contained more flickering dots (imagine comparing two old TVs without a signal). They would be paid for a correct decision, so they had an incentive to choose carefully. 

But the important part came before they gave us their final judgment. Our participants could choose between two options: they could decide that the first set of boxes was enough to make their final choice, or they could pay a small fee to see another, clearer display of dots, which would help them make a more informed decision.
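
In decision-theoretic terms, the choice our participants faced looks roughly like the toy calculation below; the accuracies, reward, and fee are invented for illustration and are not the study’s actual parameters:

    # Buy extra information when the expected gain in accuracy outweighs the fee.
    def should_buy_info(p_correct_now: float, p_correct_after: float,
                        reward: float, fee: float) -> bool:
        ev_decide_now = p_correct_now * reward
        ev_after_info = p_correct_after * reward - fee
        return ev_after_info > ev_decide_now

    # Low initial confidence: the clearer display is worth the fee.
    print(should_buy_info(0.55, 0.90, reward=1.00, fee=0.10))  # True
    # High initial confidence: paying for more information doesn't pay off.
    print(should_buy_info(0.95, 0.98, reward=1.00, fee=0.10))  # False

On this logic, buying information should be most attractive precisely when initial confidence is low, which makes the behavior of the more dogmatic participants reported below all the more striking.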

We borrowed this set-up from cognitive neuroscience, where researchers have long used simple tasks to get at people’s basic thought processes. Although they might seem simplistic, such experiments mirror everyday information-seeking scenarios; luckily for us, however, they don’t carry any political baggage.

After the task, participants filled out several questionnaires. They told us about their political preferences and how strongly they believed in their worldview. The latter allowed us to measure dogmatism. We found that both extremes of the political spectrum tended to be more dogmatic than those in the political center, although dogmatism was slightly higher on the conservative end of the spectrum.

In our dot task, more dogmatic participants made as many mistakes and were as confident as their less dogmatic peers. That meant we could be sure they didn’t hold these simple judgments dearer than their peers did, as they might with their partisanship.

However, we found a striking difference when we looked at how often they purchased the additional information: More dogmatic participants were less willing to ask for helpful information. This reluctance to seek information also didn’t pay off: Their reduced search led more dogmatic thinkers to make less accurate judgments. In the end, they lost money.

The difference between more and less dogmatic people was particularly strong when the participants had little confidence in their initial decisions. In other words, dogmatists were happier to refuse educative information—especially when they weren’t quite sure if their initial judgment was correct.

Our findings are especially concerning today: what we read and hear is, more than ever, in our own hands. At the same time, unfiltered tweets and posts are often our first contact with news stories—not a carefully vetted report. So, even if a correction is published somewhere, we might never read it unless we care to look for it. Our study suggests that dogmatism predisposes some of us to skip that check.

The fact that we find this lowered search in a simple game also suggests that dogmatism isn’t just a feature of specific opinions but may be driven by more fundamental cognitive characteristics.

Importantly, the differences between more and less dogmatic participants in our study are subtle. We also do not know how information seeking with personally relevant material like news stories might alter our results. Lastly, it remains unclear what comes first: dogmatism or reduced information-seeking?

Regardless, our research tells a cautionary tale, whether we consider ourselves to be dogmatic or not. When we are unsure about something, we shouldn’t just run with it. Rather, we are often better off checking a reliable source.

COVID-19 and the Science of Risk Perception

Throughout this coronavirus pandemic, mainstream media, national governments, and official health organizations have been broadly united in their recognition of COVID-19 as a serious threat to public health. This apparent consensus, however, belies the level of disagreement within national populations.

From conspiracy theorists who reject the very existence of the virus on one end,6 to people suffering from the debilitating effects of COVID-19-related health anxiety on the other,7 people’s perceptions of the risk posed by COVID-19 vary enormously.

As governments try to balance controlling the spread of the virus against keeping their economies moving, rates of infection are steadily rising. Controlling infection while keeping riskier sectors of the economy open (such as hospitality) depends in large part on public compliance with behavioral measures designed to control the virus.

While behavioral science has uncovered many determinants of behavior beyond beliefs, attitudes, and intentions, it remains true that people who perceive lower risk from a hazard devote less energy to mitigating that risk. This has been borne out in recent research showing that people engage more in protective behaviors, such as handwashing and physical distancing, as their perceptions of COVID-19 health risks go up.4

This highlights why it’s important that risk perceptions towards COVID-19 aren’t radically out of step with the best available science. Highly skewed perceptions of COVID-19 risk within a significant portion of a population could undermine efforts to keep infection rates under control.

What factors shape COVID-19 risk perceptions?

Policymakers rely on formal criteria for measuring risk. Where infectious disease is concerned, this typically involves multiplying the probability of infection by some measure of its negative effects on health. The rest of us form judgments about risk through a messy combination of cognitive, emotional, social, and cultural processes, which can yield outcomes very different from those produced by formal assessments.
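
A minimal sketch of that formal calculation, with invented numbers rather than epidemiological estimates:

    def expected_risk(p_infection: float, severity: float) -> float:
        """Formal risk: probability of infection times a measure of harm."""
        return p_infection * severity

    # A common hazard with mild effects vs. a rarer hazard with severe effects:
    print(expected_risk(0.30, severity=2.0))   # 0.6
    print(expected_risk(0.05, severity=20.0))  # 1.0 -- rarer, yet formally riskier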

Harnessing insight into these informal processes, a recent study sought to pinpoint how much each of a suite of potential factors may be driving disagreement in COVID-19 risk perceptions in the populations of 10 countries. Countries included were as culturally and geographically diverse as the United Kingdom, United States, Sweden, Mexico, and Japan.4

Certain findings from this study are unlikely to raise many eyebrows. People who reported having had the virus, for instance, judged it to pose greater risk than those who didn’t, while those who trusted scientists and medical professionals more also perceived greater risk on average.

Of all factors considered, however, what explained the largest amount of “variance” (or variation) in COVID-19 risk perceptions was how individualistic versus communitarian people were in their political outlooks, measured by their level of agreement with the statement, “the government interferes far too much in our everyday lives.”

This means that if you want to predict someone’s perceptions of COVID-19 risk—which in this study concerned judgments about the chance of becoming infected and the seriousness of disease symptoms—the most important thing to know is their overall attitude towards government intervention in daily life.

COVID-19 control measures such as enforced physical distancing, mask-wearing, and strict business regulation are precisely the kinds of “interference” individualists might be expected to disapprove of, and communitarians welcome (provided their communities as a whole—whether local, regional, or national—could benefit).

What isn’t so clear is why individualism/communitarianism should predict beliefs about infectiousness and symptom severity—facts which stand independent of political preferences.

What, then, might explain this finding?

Motivated reasoning and risk perception

A theory known as the cultural cognition of risk offers a possible explanation. Grounded in the psychology of risk perception, it states that people evaluate risk-relevant information in ways that affirm their pre-existing “cultural worldviews,” of which their individualism or communitarianism is a defining feature. This theory has been most profitably applied to explaining differences in people’s perceptions of environmental risks, most notably climate change.8

Individualists are said by cultural cognition theory to implicitly recognize that climate change risk legitimizes the sort of government “interference” (e.g. taxes on high-carbon vehicles) that they dislike. This in turn makes them less accepting of information that credits this risk. Communitarians, on the other hand, are expected to be sensitive to climate change risk precisely because it invites the sorts of restrictions that can help keep communities safe. Leaving the solution of collective problems to profit-seeking enterprises is anathema to the communitarian worldview.

To test this theorized link between worldviews and risk perceptions, one study had individualists and communitarians rate the validity of a report stressing the risks of climate change under two conditions: 1) when this report went on to recommend geoengineering—an industry-led initiative—as the optimal solution, and 2) when it recommended government-imposed caps on carbon emissions as the best way to reduce the risks.9

As expected, individualists who read the report recommending emission caps were more skeptical of the information on climate change risk than individualists who read the version advocating geoengineering. The opposite pattern of findings was found for communitarians, who were more skeptical of climate change risk information in the geoengineering condition. As predicted by cultural cognition, this suggests that when the actions thought to follow from crediting a particular risk are more hostile to our worldviews, we are less inclined to credit that risk.

This is not to say that people cynically look ahead to the practical implications of acknowledging a certain risk, review the compatibility of these implications with their political commitments, then consciously adjust their risk perceptions accordingly. Rather, several features of psychologically normal information processing prime our minds for motivated reasoning.10

One such feature is our biased assimilation of information. We disproportionately seek out, notice, and remember information that supports our pre-existing beliefs, attitudes, and values.11 This bias in how we filter information magnifies arguments and evidence that fit with our worldviews, while minimizing those in tension.

Related to this is the phenomenon of motivated skepticism. This is our propensity to implicitly counter-argue information that threatens our worldview, while accepting uncritically information that supports it.13

It’s easy to see how these two features of cognition—biased assimilation and motivated skepticism—could interact with what’s been referred to as the “infodemic”14 of conflicting information on COVID-19 online in such a way that people’s view of the facts ends up aligning with their overriding worldviews.

This is further fuelled by our tendency to find arguments and evidence more credible when we believe that the person or institution delivering them shares our worldviews.3 After all, these are, by virtue of sharing our worldviews, the very people most likely to tell us what we want to hear, reinforcing the effects of biased assimilation and motivated skepticism.

All this is to say that when we encounter contradictory claims about COVID-19, we will automatically attend to, less critically accept, and better remember information that fits with our overarching attitudes. In turn, we are drawn to information that undermines the rationale for responses we find politically unpalatable. This is particularly true where worldview-affirming information comes from a likeminded source, which it typically will.

For the committed individualist, these could be arguments that downplay the health risks of COVID-19 by drawing misleading comparisons to the regular flu, or conspiracy theories which claim that fatality statistics have been artificially inflated to further vested interests.5 Judgments made about COVID-19 risk informed by this information would then legitimize resistance to the very behavioral measures that motivated these judgments in the first place, completing a sequence of events that brings worldview, perception, and behavior into alignment.

Communicating risk across political divides

All this raises the question: what can be done to communicate COVID-19 risk in ways that avoid triggering political sensitivities?

One strategy shown to be effective is framing information in ways that affirm, rather than undermine, important political values.2 Where individualists are concerned, this might mean highlighting ways in which the health impacts of COVID-19 can diminish people’s ability to live life on their own terms. Hospitalization and illness can be greater limiters of self-determination than many of the behavioral measures countries have implemented to limit the spread of the virus. By drawing attention to this, the effects of COVID-19 could be framed as a threat to individualist values without needing to distort the facts.

Another strategy would be to ensure that accurate COVID-19 risk information is communicated by people and institutions with a broad range of political credentials. If risk communicators are seen to be biased—perhaps because they’re thought to represent only a single group—distrust is likely to follow. Information communicated by diverse voices should find the widest reach.

It’s particularly important that these principles are applied by the fact-checking organizations that are tasked with dismantling the deluge of misinformation about COVID-19. The vital role these organizations play in decontaminating our risk communication environment can succeed only to the extent that they’re trusted. And trust will only be invested by politically diverse publics if fact-checkers maintain a reputation of political neutrality. Inevitably, all effective fact-checkers must sometimes find fault with claims congenial to the worldviews of certain groups. Thanks to the dynamics of motivated reasoning, this risks losing the trust of those groups. It is essential, therefore, that fact-checkers are seen to be even-handed in their scrutiny of claims made across the political spectrum.

It’s also worth exploring how alternative models of fact-checking might raise trust without compromising accuracy. For example, one study had three expert fact-checkers and a politically diverse sample of laypeople rate the accuracy of different information about COVID-19.1 The results showed that panels of 10 politically-balanced laypeople tended to agree with the expert fact-checkers about as much as the experts agreed with one another, suggesting that the fact-checking of non-expert panels aligns well with expert opinion.
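
To see why a panel of ten lay ratings can track a small expert consensus, consider this toy simulation; every number in it is invented to illustrate the averaging mechanism, not taken from the cited study:

    import random

    random.seed(0)
    TRUE_SCORE = 0.5  # hypothetical "correct" accuracy rating for one headline

    def noisy_rating(noise_sd: float) -> float:
        """One rater's judgment: the true score plus noise, clamped to [0, 1]."""
        return min(1.0, max(0.0, random.gauss(TRUE_SCORE, noise_sd)))

    experts = [noisy_rating(noise_sd=0.1) for _ in range(3)]   # careful deep dives
    panel = [noisy_rating(noise_sd=0.3) for _ in range(10)]    # headline-only reads

    print(sum(experts) / len(experts))  # expert consensus, near 0.5
    print(sum(panel) / len(panel))      # panel mean also tends to land near 0.5

Averaging washes out much of the independent noise in individual judgments, which is the statistical core of the “wisdom of crowds.”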

Even more striking was that while experts conducted deep dives into the original articles presenting this information, non-expert panels made their ratings based only on article headlines and lede sentences. This suggests that by leveraging the “wisdom of crowds,” relatively accurate fact-checking could be achieved quickly and cheaply by members of the public, simultaneously sidestepping the problem of perceived bias and speeding up fact-checking such that it matches the scale of the COVID-19 infodemic that besets us.

However effective our fact-checking systems, some misinformation will inevitably slip through the net. This is where social media companies have a responsibility to encourage their users to exercise discernment when sharing information about COVID-19 on their platforms. A recent study found that getting people to briefly reflect on the accuracy of an unrelated headline substantially increased the ratio of true vs. false headlines about COVID-19 they were then willing to share.12 This was before they were told which were true and which were false. The researchers speculate that social media users often choose what to share based on goals other than accuracy, such as obtaining “likes” and other positive reinforcement. Users nevertheless do care about accuracy, which is why when nudged to reflect on it, they are less likely to share content they suspect might be untrue.

Incorporating a feature on these platforms that reminds users to consider the likely (in)accuracy of the content they see should help mitigate the sharing of false information, particularly where this might otherwise be motivated by a desire to gain positive reinforcement from others who share our political outlook.

Concluding remarks

It’s not possible to eliminate bias from human reasoning, nor should we wish to leave politics out of risk management. Judgments on how much we should care about a given issue are necessarily shaped by our personal values, which must also be central to decisions about where and to what extent we’re willing to make sacrifices in managing certain risks. Issues can arise, however, when we are forced to reason in a risk communication environment in which misinformation abounds. Here, motivated reasoning can unconsciously push us towards conclusions on matters of fact that diverge sharply from what the science tells us. When our understanding of the relevant facts is skewed, we’re less able to act in ways that best protect the things we care about, whether that’s community wellbeing or individual freedom.

It’s critical, therefore, that political leaders and public health organizations take account of the science of risk perception when developing communication strategies, particularly when dealing with a global pandemic. If carefully designed and evidence-based, these strategies could short-circuit the distorting processes of politically motivated reasoning before they gain traction. There’s also more that social media companies could do to help combat the current COVID-19 infodemic, from crowdsourcing fact-checking processes to nudging their users to pay greater attention to the (in)accuracy of the content they’re inclined to share.

These efforts to better communicate COVID-19 risk could help deescalate growing political polarization in people’s beliefs about the virus, bringing us closer to a common understanding that will enable a more coordinated, and ultimately more effective, response.

Taking a Hard Look at Democracy

Introduction

Tom Spiegler, Co-Founder and Managing Director at The Decision Lab, joins Nathan Collett to talk about what behavioral science can tell us about the 2020 U.S. election and the state of democracy more generally. Some of the themes we discuss include:

  • The idea of a social contract: whether it is relevant and accurate given cutting-edge findings in behavioral science
  • Robust voting behavior: how and why people actually can vote for themselves
  • Cognitive diversity: the idea that we all think differently, and that this influences the ways we come together to make group decisions
  • Our susceptibility to biases and frames, and how it influences our deliberative abilities
  • The essence of democracy for a modern voter: the minimalism of voting and the power of collective action
  • How different legal philosophies are based on differing notions of human agency
  • Taxation reform, and how mindset shifts may be consequential for restoring trust in government

Discussion

Part One: Fundamental Issues and the Social Contract

Nathan: For the latest installment in our perspectives project, I’m sitting down with Tom Spiegler, one of the co-founders at TDL, and someone whose interests lie at the intersection of behavioral science and public policy. 

Nathan: Tom, it’s been about a week since the U.S. election, arguably one of the biggest moments for democracy in the last couple of years. What are your initial thoughts?

Tom: What a wild ride. I think something that stood out to me—and something that has grown increasingly troubling—is our inability as a society in the U.S. to come to any kind of collective agreement.

Tom: To that point—I saw an interesting statistic going into the election: that 50% or so of Republicans and Democrats thought that the election would be fair leading up to it. But after the election, now that it has become clear that Joe Biden won, that number jumped up to something like 75%-80% of Democrats who now think it was fair as opposed to something like 25% of Republicans.

Tom: Understanding human behavior is really fundamental to evaluating the relative strength of a democracy. The functioning of democracy relies on norms, conventions, and expectations of people’s behavior. And, as a result, numerous psychological processes contribute to the stability or instability of democracy.

Tom: To my point about a lack of collective agreement—I want to distinguish that from disagreement, which is a good thing. I think disagreement, political disagreement, in particular, is a form of engagement that is healthy and necessary in a democracy. 

Tom: But the problem is that now we live in an era with multiple parallel news ecosystems. With the news media, you have Fox on one side, CNN and MSNBC on the other side, and social media algorithms pushing the more extreme views to you or views that you might already be inclined to agree with—you’re seeing that this has created bubbles of people living in totally different worlds. 

Tom: So, we’re not really disagreeing with each other anymore, just having separate conversations. And I think that that’s worrying. I mean, there’s evidence in the behavioral science world that the strength with which you hold an opinion is proportionate to the extent to which you believe it’s shared by others. I think now with the media and social media landscape, that signal has become distorted and you see extreme views moving into the mainstream, legitimized by what you think is a majority endorsement.

Nathan: Yeah, that’s fascinating. I don’t think I’ve thought about it like that before but it is way harder now to tell what the majority of people are thinking just because, one, there’s so much information, and two, like you were saying, social media algorithms and also just kind of the structure of our social networks mean that we get information that isn’t a perfect representative sample of the rest of our society.

Nathan: We’re definitely going to come back to a couple of those themes but what I’d like to do first is take us through, starting at the fundamentals of democracy, working with the idea of a social contract, the idea of agreements and discussions. Then, moving into more ideas of cooperation and collective action. Finally, trying to bring out some behavioral science solutions to some of those current issues that we just talked about. 

Nathan: Okay, so starting on part one, the idea of a social contract: the idea that we freely come together as perfectly autonomous individual beings, each with free will and the ability to think for ourselves, to act in our own best interests. Do you think behavioral science challenges that at all? Is the idea of autonomous free agents, able to make decisions independent of any influence by other people, undermined by behavioral science, or do you think that’s still a solid paradigm for understanding things like the election?

Tom: I think one thing that behavioral science brings out is that a lot of what we do is shaped by the situation that we’re in. We have the perception of free will and autonomy and, to a certain extent, of course we are free creatures that make independent decisions—but, several experiments have shown just how much you can be influenced by the situation that you’re placed in. And I think, especially at the policy-making level, we should understand that a little bit better.

Tom: Underlying a lot of our laws—especially our criminal justice system—is this presupposition that people have really robust character traits that predict how they will act in the future, and explain why they acted in a given way in the past. But from behavioral science research, we know that there is a tendency to infer wrongly that actions are due to distinctive robust character traits rather than to aspects of the situation. I think lawmakers, policy-makers, even judges, need to be more aware of how people are subject to a “fundamental attribution error.”

Nathan: That’s really interesting. It reminds me of something that Cass Sunstein wrote a long time ago on moral heuristics and how we arrive at certain notions of justice, and other ideas that underlie legal structures, based on the assumptions that we hold. One example had to do with punishment: whether we still think punishing a criminal is a good thing, out of a sense of retribution, even when no one else benefits from that punishment. I think he was making the case that ultimately, we’re making snap judgments about what’s right and wrong that are disconnected from the outcomes.

Tom: I’d agree with that. I think that there’s a good case to be made, based on behavioral science, for a shift from a retribution-based criminal justice and legal system to a more rehabilitative one.

Nathan: In your experience in law school, how are you taught about the value of retributive versus rehabilitative justice systems?

Tom: I think what marks a lot of the disagreement between the two camps is a difference in understanding human behavior. I think the retribution-based camp thinks very much in terms of character traits—if someone acts a certain way or commits a crime, it’s because they have a character trait that leads them to have a propensity to rob a store. The rehabilitative camp thinks that human action is more malleable. So, just because someone acts a certain way in one circumstance doesn’t mean that they’re that kind of person. Rather, a confluence of situational factors, among other things, caused that person to act in that way, and therefore we shouldn’t be locking this person up for 20 or so years at great expense to the taxpayer, when this person could be rehabilitated and contribute to society.

Nathan: Coming back to what you were saying earlier about how each of our actions is affected by the people around us, regardless of whether traits to commit a crime are embedded within us, I think you get into a very complicated question of agency. When you commit a crime, is it your fault? If you’re in a situation for which we can label and define a lot of the inputs that caused, or at least made it far more likely, that you’d commit that kind of crime, should the individual still be held wholly responsible?

Tom: I think that’s a hard question. It’s troubling to take the extreme position that you have no agency over your actions—we have to encourage responsibility-taking as a society. On the other end, though, we do need to recognize that it’s a lot easier to make good decisions if you’re in a privileged position in society.

Tom: Oftentimes, what you see with those that end up in prison is that the deck was stacked against them. Sure, they made the ultimate decision to commit a crime, but there are a ton of situational factors that influence these things. Research shows there’s a real poverty-to-prison pipeline—and it’s not because those born into poverty have worse character traits but rather because of things like pervasive discrimination and aggressive policing of poor communities, among other things.

Tom: And then what happens is that once people get released from prison, often after 10, 15, 20 years, they are dropped back into the world with few skills and no support network. Combine that with employers’ reluctance to hire felons and banks being unwilling to loan money, and what you see are high rates of recidivism. And then when that person commits another crime, those who support longer prison terms will point to that and say, “Look, this person was always the type of person who would commit a crime. Why are we letting them out at all, or why are we even bothering trying to rehabilitate?”

Tom: To that point—there’s an interesting story about one of the professors who taught at my law school, Shon Hopwood. He grew up in Nebraska, committed a number of bank robberies, and went to prison when he was younger. I remember him talking on 60 Minutes about how, when he got out of prison in 2008, he had never seen an iPhone, had never been on the Internet, and was computer illiterate. He was a rare success story, but for the majority of ex-felons, how do we expect them to succeed under these circumstances?

Nathan: That’s super interesting. Okay, so pivoting a little bit. I want to talk about cognitive diversity. The way I’m using this term is just the pretty simple idea that we all have slightly different brains and slightly different ways of thinking about the world, as well as slightly different external attributes. It’s pretty universally recognized that we have physical differences among us, but I think we also have varying levels of pretty intangible things like intelligence or creativity or persuasiveness or charisma. I’ve found that the idea of cognitive diversity brings a bit of a challenge to, again, these traditional models of legal systems and democratic systems.

Nathan: Big question, but do you think there’s a challenge here? When you have people with varying levels of persuasive ability, whether that’s based on their upbringing or just innate characteristics, does that variance influence the construction of group identity? If you have a group that’s supposed to come together to make this social contract out of everyone’s individual choices and free will, and some people are by nature more persuasive, more well-liked, or just able to process ideas at a faster rate than others, does that influence the creation of a social contract?

Tom: So what you’re saying is that because everyone has cognitive diversity, and some people are more persuasive, more intelligent, more charismatic, this affects group decision-making?

Nathan: What I’m suggesting is that the one person, one vote way of understanding democracy assumes that everyone will come to the voting booth and make a decision. However, when people are making their voting decision, you have to ask what factors they’re taking in. They’re probably taking a lot of cues from whoever they decide are the experts, and the people who become those experts are likely to be persuasive people. Right?

Nathan: Do you think that ability to be persuasive would be random and distributed among the population or do you think that persuasive people will also tend to be the ones who have certain interests that other people don’t? For example, would more persuasive people tend to be more wealthy and therefore kind of favor the interests of a more elite class of citizens?

Tom: That’s an interesting point. I think you’re right. Western society, U.S. society in particular, clearly values traits like intelligence, persuasiveness, and charisma. Those people tend to rise to the top. And I think if you polled the interests of the 1% compared with the interests of the bottom 99%, you’d see some kind of difference there. 

Tom: But I think that it is more complicated. Voters are more robust, more complex, than people think. One of the things that has encouraged me about the past four years or so is this resilience. If you had asked me in 2016 what the main issue was that we needed to deal with before all else, I would have said money in politics. Because at the time, I was worried that if the 1% can buy virtually unlimited amounts of TV ad time or social media ads—and if that’s enough to convince voters of whatever their point of view is—then we don’t really live in a democracy.

Nathan: A system driven by money. 

Tom: Exactly. But that hasn’t actually been the case and when you look at a lot of the races that have happened this time around, in 2020, a lot of the very well-funded candidates didn’t end up winning. And actually, many of them didn’t even end up coming close to winning. I think that points to the voter being more complex than we might think, and our democracy being more robust than we might think. If nothing else, it’s definitely shown that voters are real people that think for themselves and are more complicated than just a demographic category. And I think that’s encouraging.

Nathan: Absolutely, that is a really good rejoinder there. I wonder if there are things we can do to amplify that diversity of information sources, whether people are getting information from their neighbors, from experts, or from social media. Is that a goal we should pursue? Is that something we can alter, or is the system so robust that behavioral science interventions, or other interventions, can’t really do much to change it?

Tom: To change how people are getting their information?

Nathan: To change what information leads to political decision-making.

Tom: There needs to be an effort on the part of the social media companies and the media to do a better job of representing the whole country, because I do think that it’s relatively easy to only get fed a certain viewpoint.

Nathan: That makes me think of a bit of a manipulative way of providing information to someone. I mean, that’s a pretty well-known behavioral science intervention. Using social norms to tell everyone, oh, well, most people think this way, you should, too. Is that the kind of thing that should be stopped in politics? Should we be worried about people’s ability to harness behavioral science?

Tom: Yes. Especially when you’re using behavioral science at the government level for things like voting or convincing your constituents of anything, I think that we should be looking at how it’s being done. There’s a fine line between using behavioral science for good, say to increase voter engagement, and using it to prey on cognitive biases and vulnerabilities—leveraging confirmation bias or hindsight bias or all of these things that we know voters are susceptible to, like the bandwagon effect.

Tom: One of the things that I’ve started to look into, related to voting patterns and cognitive biases, is: does an over-emphasis on polls in the media affect voter behavior? The bandwagon effect, for example, is something that’s pretty well documented, which essentially is the concept that if something’s gaining popularity, people will look at that and be like, oh, I’m going to be for that, too, because I want to be on the winning team or I want to be supporting something that lots of other people support. 

Tom: When you have a ton of polling, especially in the early stages—and I’m thinking here about the Democratic primary elections earlier this year—so much of the focus, so early on, is on who’s more electable and who the polls favor. I think there’s a possibility that voters look at that and think, oh, I’m going to support this candidate who’s ahead, even though not a single real vote has been cast yet. And this is even worse if polls aren’t even accurately measuring public sentiment.

Tom: What we should be aiming for is an informed democracy and when we’re focusing on who’s ahead and who’s not, like a horse race type thing, we’re doing voters a real disservice and there’s a lot of psychological biases at play there that can have adverse effects. 

Nathan: Yeah, this is super interesting. I think what I’m getting out of this thought is that those early interventions you were talking about, the kind of positive, non-partisan ones, are about making voting as easy as possible or at least facilitating the democratic process.

Tom: Right, how to increase turnout or engagement. 

Nathan: Let’s bring in an interesting concept to make sense of this. We’re going to take Daniel Kahneman’s famous two systems approach. There is the intuitive, quick, automatic System 1, which is engaged in tasks like understanding someone else’s emotions—things that you don’t have to think about. Then there is System 2, which would be engaged by something like double-digit multiplication, let’s say 13 times seven. As soon as you have to do that problem, you’re in System 2. You’re doing deliberative work.

Nathan: I think one goal of an ideal democracy could be to get those positive interventions that you were talking about, those System 1 things, working. Get people to the booth. Get people thinking about democracy and doing democratic actions without thinking too hard about it. But then, and here is something new, then getting System 2 engaged once they’re making those decisions and not letting them get tripped up by confirmation bias or bandwagon effects. Having people change their mindset and go, “Oh, wait, I need to actually think about this. I need to process this in a way that is meaningful and isn’t just the first thing that comes to my mind.”

Part Two: Collective Action and Cognitive Restructuring

Nathan: So let’s get a bit more specific. When we’re thinking about democratic actions and things that are important to a voter, what does democracy feel like to an average person? When is someone doing a democratic act? When is someone actually participating in democracy in our modern, large-scale society?

Tom: That’s a good question and it’s something I’ve been thinking about a lot. We’ve been hearing a lot about how our democracy in the U.S. is at risk. And maybe it is. But I think in a weird way, what I’ve seen—at least anecdotally, living in DC these past four years—is that our democracy is as strong as it’s ever been. I mean, over the past four years, I’ve seen so much political engagement, so many political marches and movements. So many younger political candidates unseating establishment incumbents despite having way less campaign money. And, speaking of campaign money—grassroots donations have been through the roof on both sides, with campaigns able to raise a lot of money from the average voter, in $16 increments, $20 increments, which I think shows a lot of engagement.

Tom: To your question of what democracy looks like: to my mind, it’s tuning in to what’s happening in the country. It’s getting out there and protesting or going to marches for what you believe in. It’s making your voice heard; it’s voting. And looking at the state of things right now, you see all of this in abundance. This year we had record-breaking voter turnout. President-Elect Biden got the most votes of any political candidate in the history of the country, and I think that’s a sign of a healthy democracy.

Nathan: Until the past couple of years, democracy sometimes felt like just calling someone every four years to get them to come to a voting booth, check a box, and go home again. I think there are some real problems with that when you consider some of our more cutting-edge findings in behavioral science about expectations, how people understand cause and effect, and how people are susceptible to suggestion. Just to take one example: when two things are very distant in time from each other, it’s harder for people to understand that they are linked causally. Voting strikes me as something with this kind of structure, where you vote and you have your elected officials, but the consequences of the vote don’t seem to land right away. Do you think there’s a way of improving that? Do you think that is a problem here?

Tom: I guess there are multiple parts to it. Just to tackle one aspect of this question—there’s the lag in time between voter action and the effect of whatever policies eventually do get implemented.

Tom: This disconnect is always an issue, and I think that is a real challenge looking into the future. If you look at the environment, for example, a lot of the roadblocks to environmental policy are that the benefits are less visible and more long-term, so a lot of politicians don’t feel like it’s politically beneficial for them to really die on that hill, to pass major legislation that’s going to maybe cost a lot of money in the short-term. 

Tom: It’s not like the moment you pass a law, the earth gets a little bit cooler and everything gets better. The benefits are 10 years down the line, at least, and I think that’s one of the things that today’s politicians have to deal with. If the voter in the short-term sees their taxes go up because of new environmental legislation and then votes that politician out, that’s no good. I think that making the upside of long-term policy more salient for voters is really important. It’s something we’re going to have to get better at. 

Tom: At the policy level, the question is: how do we make the long-term benefits of a new law or regulation more salient? I don’t know whether we do that through economic policy, a rewiring of the incentive structure, through interventions that make the future self more salient, or something else. I think that’s something behavioral scientists in positions of power are going to have to grapple with. Public opinion—and the public’s unwillingness to tackle these issues now instead of putting them off—is an obstacle to good policy and, more generally, to good outcomes for all of society.

Nathan: Not that the two of us could just brainstorm the answers to all of these things right now, but I do have two things that may be interesting avenues to explore: one more specific, and one big broad one that we can end on. The first is that I was a little bit involved in local politics this past year. There was an election in British Columbia, and one of the major figures in the party that ultimately won had to make a case for calling a snap election in the middle of a pandemic. People were quite upset because they felt that the election was unnecessary and dangerous. The politician’s argument went like this. He said, “We can’t make policies with a year left in our term that influence the next government. We can’t lock in our subsequent government, whoever you pick three years down the road, one year down the road—we can’t lock that government into certain commitments, whatever their agenda is, because it’s their job to govern when they’re in power.”

Nathan: The argument was that we need an election because we need a mandate to institute a substantial economic reform plan to address long-term solutions to COVID-related economic issues, and we can’t do that in the next year—we have to have a three-year mandate to do it. It made me think about how we could potentially give politicians a mandate to solve bigger issues that need long-term commitments. So this is my second, bigger idea. It is about trust and about continuity. I think what my local politician was getting at is that he has to protect the agency of this hypothetical future government, but he also needs to be able to act boldly to tackle the crises of our time. Somehow, there needs to be trust between the people in power, and between the different, opposed segments of society, in order to fix these big problems.

Nathan: To elaborate a little on this, there need to be two types of trust. Trust in a vertical sense means that we need to trust the people we put in power enough that they can implement policies we may not appreciate right now, so long as they are really confident those policies are in our long-term interests. In a horizontal sense, there also needs to be trust. Political opponents need to believe that the other side are good people, to the extent that they will not destroy the achievements of their predecessors and fall into a trap of going back and forth, instead of working together to solve these big problems of collective action.

Part Three: Modern Solutions to Ancient Problems

Nathan: Going into our third section here, we can start where we left off and then move into some more concrete solutions. Earlier, we were talking about problems, like environmental issues, whose solutions are long-term and don’t pay dividends right away, and how people have to agree on those solutions and accept some small hits now to fix things later down the line. I’ll call that collective action: group behavior in the pursuit of group benefits down the line. I often compare it to a prisoners’ dilemma type of situation: if we all act selfishly right now, we’re going to end up with suboptimal outcomes, but if we can all cooperate, then we get good outcomes. So what do we know, from behavioral science, from policy, or just from personal experience in a democracy? What works? What gets us to those collective outcomes? What do we need in terms of trust, in terms of community, in terms of just deliberating and discussing with one another?

Tom: People need to trust the system. If you have a system where policies that are supported by the vast majority of people, say 75% plus, are not being implemented, I think people lose faith in the collective and start to act more individualistically. 

Tom: The current status quo creates this mindset of, if the government doesn’t have my back, then I need to have my back. From Reagan onwards, there’s been less and less trust in the government, less and less faith that the government can do a good job, and government has become synonymous with bad or inefficient or ineffective. I don’t think it needs to be that way. I don’t think that the government is inherently bad, but if the people running it are inefficient or ineffective, then it’s going to lead to that impression.

Tom: So step one is to rebuild faith in the government. I mean, if you look at polling, government officials, people in Congress, rank the lowest or second-lowest in terms of trust—I think below even lawyers, just above car salespeople. But if the government suddenly started passing policies that are favored by a large majority of Americans, aimed at improving the lives of the average American, you’d see that mindset shift.

Tom: One interesting thing that really shows how little faith Americans have in their government is the issue of taxes. Americans hate taxes, even more than people in other countries. And when you think about what a tax actually is, it’s paying money that then gets redistributed for social services or infrastructure, and the like. 

Nathan: All of which are projects that hypothetically benefit society as a whole.

Tom: Yep. But people don’t feel good about giving money to the government, whereas people do feel good about giving money to a charity, like the Red Cross—even though the Red Cross does a similar thing to what FEMA, for example, might do. That to me is an interesting dichotomy that I think needs to be looked at.

Tom: One of the big conversations in the US is tax reform, tax reform, tax reform, because we need to raise more money in taxes. But before instituting a new tax, what about collecting all of the tax that is currently owed? When you look at the amount of tax that is owed to the government versus what is actually collected, there’s really a sizable gap there of billions of dollars. I think it’s impossible for the IRS to really police that. There are too many people in the country for that.

Tom: We need to get to a place where people want to pay taxes and it can be done if you look at some other social institutions that have similar projects. People want to give money to charity, people feel good about giving money to charity, but people don’t feel good about giving money to the government, even though the money is used for similar things. 

Tom: I think there are ways of making taxes easier to deal with: maybe making tax day a national holiday, maybe pre-filling the forms, maybe giving people some choice over how that tax money gets used. There are a number of ways in which compliance could be increased by changing the frame through which people understand taxation.

Nathan: Do you think that would make a difference—getting that information to them, telling them: here’s exactly how we spent your money? There is another example from my local political scene, about a recent tax that was super unpopular. It was called the School Tax: it taxed people who owned very high-value property, but was named after what it was going towards, which was support for the education system. In this case, it still had a negative affect associated with it; the frame didn’t seem to overcome that stigma. Is that information valuable? Do you think it shapes public opinion in a way that’s meaningful?

Tom: I keep going back to this, but people enjoy donating money to charities. There’s been a lot of behavioral science discussion over altruism and whether charitable behavior is motivated by altruism, and if so, what even is altruism? The research actually shows that there’s this kind of warm glow effect when you donate to charity. You feel good when you choose to give money away, so how do we get to a place where people get that warm glow when paying their taxes? That, to me, is an interesting problem to solve— and I do think it’s a very solvable one. 

Tom: Maybe naming the tax after what it’s going towards isn’t descriptive enough, or doesn’t do enough, but I think it’s at least a step in the right direction, even if it didn’t move the needle much in your example.

Tom: I think to sum up: trust in government is important because without faith in the collective, people act in an overly individualistic way, which leads to a tragedy of the commons situation. As in, if I think other people are going to be using this resource if I don’t, I’m going to exploit it, I’m going to take advantage of it because I can. If everyone thinks like that, we’re not going to be able to become sustainable or hit the environmental targets that we need to. I think it’s the government’s role to step in and change the incentives.

Nathan: All right. I think that’s an excellent place to end it. There’s a lot to think about there: whose job it is to make these changes; whether people’s preferences are malleable enough to adopt that mindset; or whether that reform could happen more easily than we think, with people simply taking a slightly different tone about taxation, making policy a collective effort that people really feel involved in, and making representative democracy into something that is, and feels, properly representative. I think behavioral science has to be at the heart of that challenge going forward, and you’ve laid out a pretty convincing argument for that. That is all for now. Thanks a lot for joining me, Tom—I found this to be a super insightful conversation.

The Power of Narratives in Decision Making

People have been telling stories to one another for thousands, maybe millions of years. They have told stories huddled around the campfire; they have painted them on cave walls; they have written them down on stone tablets, scrolls, and in books for future generations. People have journeyed from small hamlets to large cities to tell stories, fought wars because of them, and made huge sacrifices for their sake.

Many different terms have been coined to represent the essential nature of humankind: homo faber (man the maker), homo economicus (rational man), homo politicus (political man), and, of course, homo sapiens (wise man). Walter R. Fisher, an academic, proposed homo narrans (storytelling man) as an important addition to this list in 1985.

Because whatever the shape, form, or time, storytelling seems to play a major role in human interaction. Recent scientific attention has shed light on the power stories hold in determining human behaviour. Even now, we are still being surprised by the true extent to which stories influence our everyday lives. This article attempts to share what we know about stories so far.

We think in terms of stories

Stories seem to be hardwired into the way in which we process the world around us. This is what a neurological theory known as the theory of narrative thought (TNT) seeks to explain. The theory observes that the brain processes everyday sensations and things happening around us and compiles sequences of these sensations into events.19,21 Sequentiality is important because when we try to explain phenomena, we implicitly look to the past for their causes. That’s because our observations of the world rely on a linear conception of time: we look for ways in which the past has shaped the present or the present will shape the future, but never consider the future shaping the present. Thus, our understanding of causality is inextricably linked to sequentiality—this happened and then that happened—and sequences are, of course, linked to time.

When the events happening around us are organized by time and by causation, the result is a structure often formally referred to as a narrative: a causal chain of events that flows from the past to the present.

The theory suggests that there is evolutionary value in organizing the world around us into narratives. If you weave links between events in the past to their manifestations in the present (Joe stumbled into a bear’s den, he’s now injured), it can help you extrapolate what may sensibly be expected to occur in the future (stumbling into a bear’s den will probably lead to injury). In evolutionary terms, this can enable us to better understand and recognize threats before they happen, reducing potential harm before it even materializes. 

The evolutionary hypothesis is but one theory to suggest why stories may be hardwired into our way of being. Nassim Taleb suggests the Andrey Nikolayevich Rule, which says that in real-world economic life we are faced with a lot of information that is costly to obtain, costly to store, and costly to manipulate and retrieve. As a result, we simplify: we reduce the dimensions of life through the use of stories.20

According to Taleb, stories help us to make the world “less random than it actually is,” and they can fit in with our incessant drive for sense-making.6,20 Chater & Loewenstein similarly posit that our hardwiring for stories is based upon our dislike for entropy or disorder.6 We try to organize our lives into narratives, to force sense into the randomness around us.

So, there is good reason to believe that we use stories in the way we think.

What makes stories so special?

“Once upon a time.” These four words almost immediately transport you elsewhere and immerse you in elsewhat. And immersion, as we will soon see, is one of the reasons why stories are such effective communication devices.

Using a narrative as a communication device tends to be more effective than alternative forms of communication because it is more engaging, demanding more focus, more attention, and more involvement.

A study by Kilaru et al. (2014) shows that narratives are processed differently than other forms of information.13 They argue convincingly that we process stories the same way we process first-hand experiences; narratives invite you to mentally rehearse the actions within them. Within the brain, most of the same regions are stimulated when someone performs an action as when that person reads a narrative about that action!18

The same study describes an experiment about how narratives affect recall of medication guidelines. Participants who had the guidelines explained to them using a narrative were better at recalling those guidelines than those who were simply given the information alone. When a narrative is involved, the brain simulates the actions described in it, making us feel like we are re-enacting them ourselves. This makes the story more memorable: it is as though it were your own experience, and it suddenly becomes intensely personal.

Not only can stories help with recall, they can also help to persuade others. Adaval & Wyer (1998) have previously shown that stories are processed holistically—that is, you pay attention to the “whole,” rather than to the individual “pieces.”1 And when this happens, you are less likely to come up with counter-arguments. In other words, when someone provides you with all the details, you are more likely to come up with a counter-argument for one of those details than when that person provides the same information as a coherent story. In one of the experiments Adaval & Wyer ran to demonstrate this, participants were asked to evaluate vacation packages.1 Participants evaluated vacations more favorably when they were described in a narrative than when their features were simply listed.

When asked why narratives hold such an advantage, the authors provide two main reasons: (a) because their structure resembles that of information acquired through daily life experiences, and (b) because of the use of a holistic—rather than a piecemeal—strategy for computing judgments.

Stories and consumer decisions

So, if you have a vehicle that is more engaging, facilitates better recall, and generates fewer counter-arguments, it goes without saying that it also wields the power to shape behavior. To expand on how we can implement this power in our environments, I will invite you to consider the following stories (wink, wink).

First, imagine you are buying a new phone. Most assessments of consumer decision-making processes would assume that before deciding on whether you will buy it, you will first examine each piece of information about the phone individually. But in reality, this sort of piecemeal approach is not necessarily what is happening. 

Instead, you might be judging the phone by visualizing a series of events involving its acquisition and its use in different contexts. You might imagine visiting many shops, buying the phone, reading its manual, playing with its functions, taking it on holiday and capturing lovely memories with it. And of course, during this imaginary situation, the basic features of the phone (weight, camera resolution, description, price, etc.) will come into consideration. They could come into play as you imagine the ease with which it can be stored in your pocket, or how you can save enough to afford it on your salary.

So, the ultimate decision to purchase may not be based merely on the specific features of the phone, but rather the imagined sequence of events as a whole.1 In other words, our choices often rely on narratives.

This is arguably what GoPro does so well. They don’t sell you the features of their cameras. They sell you the swashbuckling, risk-taking, adventurous lifestyle narrative that accompanies their cameras. 

One of my personal favorite studies demonstrated on eBay how simply telling a story about an object, such as a travel souvenir or a name brand product, increased how much people valued it.14 In the world of vintage cars, it is generally known that “barn finds”—cars that have been abandoned and rediscovered years later, typically in a dusty decrepit space such as a barn—sell for more than their non-barn equivalents, even though they are usually in worse shape. The barn narrative wields power.

Narratives and public health

Next, I would like to bring to light a study in public health that documents the “Angelina effect,” which describes how more women had mastectomies after merely reading about Angelina Jolie’s own mastectomy story.8,9 Some may say that this effect only occurs due to some sort of “messenger effect,” which describes how people tend to focus on the person delivering the message instead of on the message itself. Maybe these women decided to have surgery just because they wanted to be like Jolie, and not because of concerns about their health. While this may be the case, health news stories can lead to behavioral change even when they do not involve celebrities, and can also lead to change when they are negative. 

The influence exerted by stories on public health is not always for the better. In one study, a decrease in ibuprofen use was observed after participants read a single New York Times story about a woman who had a very rare, life-threatening reaction to this common over-the-counter medication.17

And since the world is today in the midst of a global pandemic, the role of stories in the case of vaccinations also comes to mind. One powerful story that has travelled the globe is that of parents relating how the emergence of autism in their young children coincided with their planned MMR vaccinations. The narrative they transmit is an extremely poignant one: if only I hadn’t vaccinated my child, they wouldn’t have autism today. Even though there is no evidence linking vaccines to autism, this message has spread very effectively from an extremely small number of sources, such as actress Jenny McCarthy, and has had a huge impact on vaccination choices in the U.S.4,5,7

Narratives in business

Finally, for a case of narrative implementation in business, we travel to a call center which was having trouble, as many do, retaining staff members. Working in a call center is tough: you have to deal with angry, impatient customers all day, and the nature of the role is very repetitive. So, most call centers have high levels of staff turnover, costing them significant amounts to constantly hire and train new recruits. 

But when researchers decided to provide job applicants with “realistic previews” of the very worst parts of the job—in this case, real experiences narrated to the applicants—something surprising happened. Job turnover rates decreased, and in some instances, job satisfaction increased.10, 11, 12

This is a good example of the power and usefulness of narrative transportation. The realistic previews are a form of narrative, and when you engage in a narrative you are invited to re-enact it. By re-enacting the worst parts of the job, you are, as an employee, preparing yourself for them. New employees will thus know exactly what they are getting themselves into, and won’t be disillusioned. 

It is also important to note that this impressive effect was not a result of self-selection bias, which arises when individuals select themselves into a group. Because the narrative previews were provided after candidates made their decision to join, but before they started the role, it is safe to assume that there was no pre-existing difference between the groups that could have been responsible for this effect. So, these previews ended up being a practically costless yet powerful implementation of narratives.

Happily ever after?

So, stories are a fundamental part of who we are. Homo narrans not only thinks in terms of stories, but is gripped, entranced, transported, and influenced by them too. Clearly, their power is such that in many scenarios they will be the device of choice. But care must be taken when trying to wield them as tools to change behavior. This is where applied behavioral scientists should come into play: through rigorous testing and an understanding of the nuances of literature, powerful stories can be crafted ethically, avoiding deleterious consequences while keeping readers enthralled.

Nudging Consumers Towards Big-Picture Thinking

The desire for instant gratification, and the battle to tame this base impulse, is a universal struggle. As the wise 13th-century Persian poet Rumi once said: “The intelligent desire self-control; children want candy.”

Most animals, when presented with a treat, will gobble it up immediately. But as humans, we boast about our ability to delay gratification, though often this is because we believe a larger benefit awaits us in the future if we control ourselves.

It’s not a perfect skill by any means—but I am usually slightly better than my beagle at savoring a treat.

A theory of minds

How are we able to achieve this? A leading theory holds that it is our ability to read someone else’s mind that holds the key to the answer. Now you may be picturing the mentalist Derren Brown waving his fingers and staring intently at a volunteer from his audience, but what I am referring to is something called the “Theory of Mind.” This is the ability to imagine yourself in another person’s shoes, so to speak, though it also applies to being able to put ourselves in our own shoes in an imagined future.

Theory of mind (TOM)1 is defined as the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own. This clever little trick, in my opinion, is what really differentiates us from the animals, and is the foundation of empathy. Sure, it has been found that apes and even rats have some form of this ability,2 but we are the ones that really excel at it. And most movies would be rather boring if we didn’t have it.

So, does TOM allow a person to project their future self to their current self? For instance, does it allow me to mentally picture my future self having trouble fitting into my jeans after eating that third donut, and thereby stop me from picking it up? Or, in what is known as temporal discounting, does it stop me from taking a pile of money now instead of waiting to get an even bigger stash of cash later?

A cunning experiment

To test this theory, a group of scientists at the University of Zurich and the Heinrich Heine University conducted a clever experiment.3 They looked at an area of the brain called the temporoparietal junction (or TPJ for short), the region believed to be all-important in allowing us to form TOM.4

They figured that if you could switch this area off, people should also have difficulty choosing options that have a larger future benefit over an immediate benefit. They used a method called disruptive transcranial magnetic stimulation (TMS),5 where a magnetic coil placed near the skull produced small electric currents in the brain and inhibited the activity of the TPJ.

They then carried out two tasks with their subjects. In the interpersonal task, subjects made choices between a selfish reward for only themselves (from 75 to 155 Swiss francs) and a prosocial reward that was equally shared between themselves and another person (75 Swiss francs each).

In the time-based decision task, subjects chose between a variable reward (0 to 160 Swiss francs) given immediately and a fixed reward (160 Swiss francs) received after a delay of 3 to 18 months.

Subjects with an inhibited TPJ (the region switched off with the big magnet) were more likely to take the money upfront, rather than delay gratification and wait for a larger prize. They were also less likely to share the money with another person.
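To make the trade-off in the time-based task concrete, it can be formalized with a discounting model. The study itself doesn’t report one, but hyperbolic discounting is a standard way to describe choices like these; the sketch below (in Python, with purely illustrative discount rates) shows how a steeper discount rate flips the choice towards the immediate reward.

```python
# A minimal sketch of the time-based task, assuming a standard hyperbolic
# discounting model: V = A / (1 + k * D), where A is the delayed amount,
# D the delay, and k the individual's discount rate. The k values below
# are illustrative assumptions, not figures from the study.

def discounted_value(amount: float, delay_months: float, k: float) -> float:
    """Present subjective value of a delayed reward (hyperbolic model)."""
    return amount / (1 + k * delay_months)

def prefers_immediate(immediate: float, delayed: float,
                      delay_months: float, k: float) -> bool:
    """True if the immediate reward beats the discounted delayed reward."""
    return immediate > discounted_value(delayed, delay_months, k)

# Mirroring the task: 160 Swiss francs after a 12-month delay versus a
# variable immediate payout.
for k in (0.05, 0.25):  # patient vs. steep ("impulsive") discount rate
    for now in (60, 100, 140):
        choice = "take now" if prefers_immediate(now, 160, 12, k) else "wait"
        print(f"k={k:.2f}, immediate={now}: {choice}")
```

On this reading, inhibiting the TPJ behaves like raising k: the delayed 160 francs is devalued more steeply, so smaller immediate amounts start to win out.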

What the researchers essentially found is that we weigh the needs and desires of our current self against the needs and desires of our imagined future self. To quote the study’s author, “The function of perspective-taking is essential to both of these tasks,” in terms of both “thinking how someone else would feel if you give them money and also how you yourself in the future would feel with that money.”

The Theory of Me

What are the implications of this?

Investing in your future is hard.6 It is a trade-off between missing a reward right now and gaining one later. The easier it is to picture yourself with those future rewards, the more you can empower self-control. Advertising often fails at this, simply spewing out features and benefits without really allowing the audience to take a first-person perspective. Affording a first-person perspective takes the load off the individual having to project themselves into the future, and allows them to be emotionally engaged.

Using accurate metaphors and concrete illustrations is one way to help consumers envisage that first-person viewpoint and emotionally transport them.7 For instance, showing an older version of the target audience enjoying their retirement with their family makes saving more salient, and displaying an avatar of what you would look like after following a gym plan makes getting up earlier in the morning to hit the weights a bit more motivating.

So, I would like to call this the “theory of me.” Whenever you are trying to get somebody to do something in the future, use the “theory of me” and make it easier for them to see themselves in their future skin. By making that future more pleasant and salient, you will make the decision much easier.

Protecting Your Projects from Cognitive Bias

Imagine it’s 1990, and West and East Germany are doing the previously unthinkable: reunification is well on its way. To bring the city together, there is talk of a single, unified airport for both Berlins. Sixteen years later, building begins and a prospective opening date is set for fall 2011.

Then it is postponed to June 2012.

But four weeks before the rescheduled opening, the building does not pass its safety inspection. The opening is pushed back until summer 2013, then to early 2014. By 2014, it is projected that the opening will be in 2017. As the original budget is surpassed by €5 billion1 and companies depending on the airport file for bankruptcy due to delays, Berliners and Germans alike become habituated to the never-ending public works blunder known as the BER airport.

By 2019, we came to believe that the airport would never open and that the never-used terminal would be demolished. There were rumours that it had been built with no light switches,2 that the roof was twice as heavy as the structure could bear,3 and that the planner was just a student. Some of this is likely untrue, but it engages a feeling of schadenfreude, defying the “always perfect” stereotype of German engineering.

This story might be a fun anecdote for your next (virtual) cocktail party. But it also highlights a very real problem: projects almost never go as planned. And Berlin is not alone: just google “planning delay” along with your own city, and you are likely to find a similar example.

This leads us to the very unsurprising conclusion that despite extensive experience with project planning, things often don’t go as planned. Projects start off with a group of highly talented and optimistic people, but budgets inevitably run over and timelines are extended.

That poses a question: are delays inevitable, or is it possible to prevent them in the future?

That’s where agile software development comes in. The agile methodology is a form of project management that grew out of software development in the early 2000s and is now applied by a wide variety of teams and departments (e.g. agile HR, agile product development, and so on). Many of the tenets of agile are validated by decades of scientific research coming out of the fields of cognitive psychology and behavioural economics.4

The connection between group cognition and Transport for London

In 2017, the Behavioral Insights Team (BIT) from the UK was contracted to investigate the project management of London’s transport authority, and how time and costs could be better estimated and managed.5

The BIT’s in-depth analysis of London’s transport authority found that project management teams suffered from four cognitive biases that are exacerbated in group settings. Two of particular interest are:

  1. The planning fallacy
  2. Groupthink

This is interesting because conventional wisdom suggests that the benefit of teamwork is greater creativity, and hence more alternative ideas.

This case highlights the finite power of the brain, even when many minds are brought together. (Note: collective work is not always ineffective.)6 Humans rely on heuristics (mental shortcuts) to overcome our finite resources, and these shortcuts often result in cognitive biases.7

For those managing projects both big and small, understanding how decision making is influenced by these mental shortcuts provides a better understanding of what can go wrong when estimating the timeline, budget, and success of a potential project.

Being optimistic isn’t bad. Is it?

In the example above about the BER airport disaster, we can see (in hindsight and as external viewers) that public works project managers seem to be unreasonably optimistic.

The planning fallacy describes this exact phenomenon, whereby people tend to underestimate the resources required for a project.8

Daniel Kahneman described in his 2011 book Thinking, Fast and Slow how this bias introduces unrealistic forecasts hinging on best-case scenarios.

Why does this happen?

It’s a matter of perspective-taking. When planning a new project, there is a strong tendency to take an “inside” perspective: to focus on what makes up the project (internally) rather than looking at the external factors that will influence the delivery of the project tasks.8,9

People rarely take into account their own past experiences with similar tasks, instead focussing on the future expected outcomes…[they] anchor future outcomes on plans and available scenarios of success, rather than past results, which leads to overly optimistic predictions.


—Behavioral Insights Team, 20175

This, more often than not, leads to a failure to consider and identify the “unknown unknowns.”10

a. Remedy #1: Feedback

Taking an outside view is recommended to keep the planning fallacy in check. This involves referencing similar past projects to form a baseline for completion times and budget. A word of caution here: This may seem obvious, but as research finds, people often ignore this information entirely when planning.

It is therefore recommended to try and collect “well-designed feedback,” as the BIT puts it.11 This can be as simple as ensuring that failures are openly discussed and dissected, frequently. 
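As an illustration of what taking the outside view can look like in practice, the sketch below anchors a new estimate on how much similar past projects overran their own initial estimates, rather than on the team’s inside view. All figures are invented for illustration.

```python
# A minimal sketch of reference-class forecasting (the "outside view"):
# scale the team's inside estimate by the typical overrun observed in a
# reference class of similar past projects. The data are invented.
from statistics import median

# (initial estimate in weeks, actual duration in weeks) for past projects
reference_class = [(10, 14), (8, 13), (20, 26), (6, 9), (12, 20)]

overrun_ratios = [actual / planned for planned, actual in reference_class]
typical_overrun = median(overrun_ratios)

inside_estimate_weeks = 16  # the team's optimistic, "inside view" estimate
outside_estimate_weeks = inside_estimate_weeks * typical_overrun

print(f"Typical overrun in the reference class: {typical_overrun:.2f}x")
print(f"Inside view: {inside_estimate_weeks} weeks; "
      f"outside view: {outside_estimate_weeks:.0f} weeks")
```

The point is not the specific numbers but the habit: the baseline comes from recorded history, not from the plan.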

Agile implementation: The Agile Manifesto lists twelve principles. One of these is, “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”12

Feedback is therefore an integral aspect of an agile team. Regular standup meetings happen weekly or more frequently and allow teammates to share progress, and to follow up on potential roadblocks.13

In the same light, “retrospectives” are bigger review sessions, like a built-in “breather” for the team.

b.  Remedy #2: Keeping score

Data collection is another way to avoid committing the planning fallacy. By keeping score on past performance, costs, and timing, better estimates for future projects can be gleaned. Keeping score doesn’t just consciously inform a team of their realistic performance. The way feedback is provided can also activate unconscious biases in a positive way: the Behavioral Economics Team of Australia found that providing feedback to doctors on how their antibiotic prescription rates compared to those of other doctors led to a 12.3% reduction in overall prescriptions.14

Agile implementation: Data collection is a built-in feature of agile management tools like Jira and Asana. Both allow a team to track the rate of task/issue completion in one sprint or week, depending on the format of your project.
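The underlying calculation these tools perform is simple enough to sketch: use the velocity of past sprints to forecast how long the remaining backlog will take, instead of producing a fresh optimistic guess. The figures below are invented for illustration.

```python
# A minimal sketch of "keeping score": forecast remaining work from the
# team's recorded velocity. Sprint figures are invented; tools like Jira
# and Asana track the same data automatically.

past_sprint_points = [21, 18, 24, 19, 22]  # story points completed per sprint
backlog_points = 130                       # estimated work remaining

velocity = sum(past_sprint_points) / len(past_sprint_points)
sprints_remaining = backlog_points / velocity

print(f"Average velocity: {velocity:.1f} points/sprint")
print(f"Forecast: ~{sprints_remaining:.1f} sprints to clear the backlog")
```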

c.  Remedy #3: Pre- and post-mortems

The BIT report mentioned above recommends something called a “pre-mortem” developed by the researcher Gary Klein. The concept is very similar to the retrospective, but a pre-mortem is held at the onset of project planning.

A pre-mortem involves bringing all stakeholders together in a workshop to imagine that the project has failed. Stakeholders are asked to consider and ideate reasons why the project failed. Afterwards, the different stories are shared and used to inform others about problems they may not have independently considered (thereby bringing in the “outside factors”).

Pre-mortems are so effective that they “can improve a person’s ability to correctly identify reasons for future outcomes by 30 percent.”15 They can also help overcome groupthink, which we will cover next.

Agile implementation: During the planning phase of a sprint or project, a pre-mortem exercise (sometimes called a “futurespective”) can be easily integrated as a formalized moment for the team to pause and reflect.

d.  Remedy #4: Breaking a project down

The BIT also cited more recent research into something called the segmentation effect as justification for setting aside time at the onset of a project to think about the individual tasks involved. Research from Forsyth and Burt found that when people were asked to estimate the time needed to complete a large task, the time they allocated to the large task was less than the summed total of the time they allocated to its component parts.16

Agile implementation: Itemizing actions and tasks is a key aspect of the agile method. The planning phase of an agile team can take many forms, but usually the team members will all sit together, and estimate and vote on the size of component tasks. This makes it easier for the project manager to plan because then they have time and effort estimates from those who are actually doing the work, as well as data from historical task completion rates.
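A toy example makes the segmentation effect tangible. The numbers below are invented, but the pattern mirrors Forsyth and Burt’s finding: component estimates sum to more than the one-shot estimate for the whole task.

```python
# A toy illustration of the segmentation effect: the estimate for a whole
# task tends to come in lower than the sum of estimates for its parts.
# All numbers are invented for illustration.

whole_task_estimate_hours = 40  # "the feature will take about a week"

component_estimates_hours = {
    "design review": 6,
    "implementation": 24,
    "tests": 10,
    "documentation": 5,
    "deployment and monitoring": 7,
}

segmented_total = sum(component_estimates_hours.values())

print(f"Whole-task estimate: {whole_task_estimate_hours} h")
print(f"Sum of component estimates: {segmented_total} h")
# 52 h vs. 40 h: itemizing surfaces work the holistic estimate glossed over.
```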

Groupthink

A peculiar outcome of group work is that a team’s output can be less than the sum of its parts. Group dynamics and the influence of a few can disable the normal process of discussion, questioning, and eventual agreement, such that conformity to one view prevails instead of a full appraisal of all options.

What ends up happening is that groups cover common knowledge: members discuss what they all know instead of questioning what they do not.

A group is especially vulnerable to groupthink when its members are similar in background, when the group is insulated from outside opinions, and when there are no clear rules for decision making.

—Irving Janis, 197217

a.  Remedy #5: Playing devil’s advocate

As we have learned, taking an outside view is harder than it seems. An easier route is to stop trying to take it yourself.

Instead, assign someone on the team to role-play the devil’s advocate.18

The devil’s advocate strategy was recommended by the BIT for London’s transportation group to integrate into early team discussions and planning sessions. How it works: one person on the project team (or someone from outside the organization) role-plays a competitor, trying to find faults with the plan and challenge its decisions.

The practice is backed by research: people do not underestimate the time others need to complete a task; they only overestimate their own abilities.9

Agile implementation: If you work with company-wide dashboards, teams can observe one another’s boards, and surfacing project progress there invites fresh eyes and constructive feedback. A designated devil’s advocate, however, is not standard practice. To add one, simply assign someone during sprint planning who is in charge of finding flaws in plans and questioning any escalation of commitment.

b.  Remedy #6: Being mindful of group dynamics

Another way to prevent groupthink is through minor changes to the way a group discusses ideas.

Irving Janis, the formative researcher on groupthink, recommended that leaders speak last. Even flat hierarchies can fall into the pattern of following the leader.

Agile implementation: One of the twelve agile principles is to take a hands-off approach to management: “Give them [team members] the environment and support they need, and trust them to get the job done.”

c.  Remedy #7: Pre-commitment and second chances

The problem with groupthink starts with the team. Therefore, one way to overcome it is to bypass the team altogether, at least at the beginning.

The BIT recommends asking team members to consider the task or project before the first meeting, in a maneuver called “pre-commitment.” Here, people consider the issues and potential solutions on their own, before they can be influenced by the group.

Along the same lines, after the initial planning meeting is held, offer a second chance: begin the next planning session by allowing all team members to voice any reservations they may have developed since the first discussion.

Agile implementation: The agile method encourages frequent team collaboration, which makes it susceptible to the biases of group dynamics. However, when planning tasks, agile encourages individual brainstorming before ideas are shared with the group: the Jira backlog acts as a low-commitment store for ideas that can be finalized later (a built-in second-chance feature). Furthermore, because issues are not automatically assigned to employees, each member has the chance to consider the best way to complete a task on their own time.

Summary

Berlin’s BER airport finally opened for passengers on November 1st, 2020, nine years later than originally planned. Delays in project planning seem to be a common occurrence, but the scale of the BER airport debacle remains remarkable (the first days of operations were still met with complications, like a leaky roof).19

The good news is that if you are planning to go agile (or already are), you are likely already taking advantage of years of behavioral science research. That’s because the agile method is (indirectly) backed by peer-reviewed findings from the study of cognitive processes. By following the recommendations of agile, you can help to protect your team’s projects from bias.

Strategies to avoid the planning fallacy and groupthink:

1. Well-designed feedback: Meet with the team often and deliberate together. Share the good and bad of a project’s tasks and progress. Write it down and review it at the onset of future projects. Hold retrospectives at the end of the project.

2. Keep score: Integrate data-based tracking of task completion rates to use for future project planning.

3. Pre- and post-mortems: At the project onset, ask the team to imagine themselves in the future and act as if the project has failed. Ideate on how and why this happened. At the project wrap-up, review what actually happened and what was and was not expected. Keep notes and use them for future planning.

4. Break project to-dos into tasks that take between 30 minutes and 3 days: Extra effort spent itemizing sub-tasks into issues that take at least 30 minutes but no longer than 3 days to complete leads to better estimates of the time required. Ask those working on the tasks to chip in on the estimation.

5. Assign a devil’s advocate: Offset groupthink by asking someone to play the contrarian during project planning meetings. 

6. Be mindful of group dynamics: Make a habit of leaders speaking last and hand over story estimation to the team.

7. Pre-commit but offer a second chance: Before the details of the “how” and “when” are decided, allow team members to consider the project on their own time. Then allow for open discussion in a group; it’s even better if ideation can be done anonymously. Along the same lines, after the initial meeting, provide a follow-up where people can voice second guesses.