COVID-19 and the Science of Risk Perception

Throughout this coronavirus pandemic, mainstream media, national governments, and official health organizations have been broadly united in their recognition of COVID-19 as a serious threat to public health. This apparent consensus, however, belies the level of disagreement within national populations.

From conspiracy theorists who reject the very existence of the virus on one end,6 to people suffering from the debilitating effects of COVID-19-related health anxiety on the other,7 people’s perceptions of the risk posed by COVID-19 vary enormously.

As governments try to balance controlling the spread of the virus against keeping their economies moving, rates of infection are steadily rising. Controlling infection while keeping riskier sectors of the economy open (such as hospitality) depends in large part on public compliance with behavioral measures designed to control the virus.

While behavioral science has uncovered many determinants of behavior beyond beliefs, attitudes, and intentions, it remains true that people who perceive lower risk from a hazard devote less energy to mitigating that risk. This has been borne out in recent research showing that people engage more in protective behaviors, such as handwashing and physical distancing, as their perceptions of COVID-19 health risks go up.4

This highlights why it’s important that risk perceptions towards COVID-19 aren’t radically out of step with the best available science. Highly skewed perceptions of COVID-19 risk within a significant portion of a population could undermine efforts to keep infection rates under control.

What factors shape COVID-19 risk perceptions?

Policymakers rely on formal criteria for measuring risk. Where infectious disease is concerned, this typically involves multiplying the probability of infection by some measure of its negative effects on health. The rest of us form judgments about risk through a messy combination of cognitive, emotional, social, and cultural processes. These can yield outcomes very different from those produced by formal assessments.
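The formal calculus described above can be sketched in a few lines. The probabilities and severity scores below are invented purely for illustration; real assessments use epidemiological estimates, not round numbers.

```python
def expected_risk(p_infection: float, severity: float) -> float:
    """Formal risk score: probability of infection times a measure of health impact."""
    return p_infection * severity

# Two hypothetical scenarios (numbers are illustrative only):
common_mild = expected_risk(0.30, 2.0)   # fairly likely infection, mild illness
rare_severe = expected_risk(0.06, 10.0)  # unlikely infection, severe illness

# Both scenarios receive (essentially) the same formal score, even though
# informal, intuitive risk perception often treats them very differently.
print(f"common but mild:  {common_mild:.2f}")
print(f"rare but severe:  {rare_severe:.2f}")
```

The point of the sketch is the contrast: the formal criterion collapses each scenario to a single expected-harm number, whereas the informal processes the article goes on to describe are sensitive to much more than that product.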

Harnessing insight into these informal processes, a recent study sought to pinpoint how much each of a suite of potential factors may be driving disagreement in COVID-19 risk perceptions in the populations of 10 countries. Countries included were as culturally and geographically diverse as the United Kingdom, United States, Sweden, Mexico, and Japan.4

Certain findings from this study are unlikely to raise many eyebrows. People who reported having had the virus, for instance, judged it to pose greater risk than those who didn’t, while those who trusted scientists and medical professionals more also perceived greater risk on average.

Of all factors considered, however, what explained the largest amount of “variance” (or variation) in COVID-19 risk perceptions was how individualistic versus communitarian people were in their political outlooks, measured by their level of agreement with the statement, “the government interferes far too much in our everyday lives.”

This means that if you want to predict someone’s perceptions of COVID-19 risk—which in this study concerned judgments about the chance of becoming infected and the seriousness of disease symptoms—the most important thing to know is their overall attitude towards government intervention in daily life.

COVID-19 control measures such as enforced physical distancing, mask-wearing, and strict business regulation are precisely the kinds of “interference” individualists might be expected to disapprove of, and communitarians welcome (provided their communities as a whole—whether local, regional, or national—could benefit).

What isn’t so clear is why individualism/communitarianism should predict beliefs about infectiousness and symptom severity—facts which stand independent of political preferences.

What, then, might explain this finding?

Motivated reasoning and risk perception

A theory known as the cultural cognition of risk offers a possible explanation. Grounded in the psychology of risk perception, it states that people evaluate risk-relevant information in ways that affirm their pre-existing “cultural worldviews,” of which their individualism or communitarianism is a defining feature. This theory has been most profitably applied to explaining differences in people’s perceptions of environmental risks; most notably, climate change.8

Individualists are said by cultural cognition theory to implicitly recognize that climate change risk legitimizes the sort of government “interference” (e.g. taxes on high-carbon vehicles) that they dislike. This in turn makes them less accepting of information that credits this risk. Communitarians, on the other hand, are expected to be sensitive to climate change risk precisely because it invites the sorts of restrictions that can help keep communities safe. Leaving the solution of collective problems to profit-seeking enterprises is anathema to the communitarian worldview.

To test this theorized link between worldviews and risk perceptions, one study had individualists and communitarians rate the validity of a report stressing the risks of climate change under two conditions: 1) when this report went on to recommend geoengineering—an industry-led initiative—as the optimal solution, and 2) when it recommended government-imposed caps on carbon emissions as the best way to reduce the risks.9

As expected, individualists who read the report recommending emission caps were more skeptical of the information on climate change risk than individualists who read the version advocating geoengineering. The opposite pattern of findings was found for communitarians, who were more skeptical of climate change risk information in the geoengineering condition. As predicted by cultural cognition, this suggests that when the actions thought to follow from crediting a particular risk are more hostile to our worldviews, we are less inclined to credit that risk.

This is not to say that people cynically look ahead to the practical implications of acknowledging a certain risk, review the compatibility of these implications with their political commitments, then consciously adjust their risk perceptions accordingly. Rather, several features of psychologically normal information processing prime our minds for motivated reasoning.10

One such feature is our biased assimilation of information. We disproportionately seek out, notice, and remember information that supports our pre-existing beliefs, attitudes, and values.11 This bias in how we filter information magnifies arguments and evidence that fit with our worldviews, while minimizing those in tension.

Related to this is the phenomenon of motivated skepticism. This is our propensity to implicitly counter-argue information that threatens our worldview, while accepting uncritically information that supports it.13

It’s easy to see how these two features of cognition—biased assimilation and motivated skepticism—could interact with what’s been referred to as the “infodemic”14 of conflicting online information about COVID-19, such that people’s view of the facts ends up aligning with their overriding worldviews.

This is further fueled by our tendency to find arguments and evidence more credible when we believe that the person or institution delivering them shares our worldviews.3 After all, these are, by virtue of sharing our worldviews, the very people most likely to tell us what we want to hear, reinforcing the effects of biased assimilation and motivated skepticism.

All this is to say that when we encounter contradictory claims about COVID-19, we will automatically attend to, less critically accept, and better remember information that fits with our overarching attitudes. In turn, we are drawn to information that undermines the rationale for responses we find politically unpalatable. This is particularly true where worldview-affirming information comes from a likeminded source, which it typically will.

For the committed individualist, these could be arguments that downplay the health risks of COVID-19 by drawing misleading comparisons to the regular flu, or conspiracy theories which claim that fatality statistics have been artificially inflated to further vested interests.5 Judgments made about COVID-19 risk informed by this information would then legitimize resistance to the very behavioral measures that motivated these judgments in the first place, completing a sequence of events that brings worldview, perception, and behavior into alignment.

Communicating risk across political divides

All this raises the question: what can be done to communicate COVID-19 risk in ways that avoid triggering political sensitivities?

One strategy shown to be effective is framing information in ways that affirm, rather than undermine, important political values.2 Where individualists are concerned, this might mean highlighting ways in which the health impacts of COVID-19 can diminish people’s ability to live life on their own terms. Hospitalization and illness can be greater limiters of self-determination than many of the behavioral measures countries have implemented to limit the spread of the virus. By drawing attention to this, the effects of COVID-19 could be framed as a threat to individualist values without needing to distort the facts.

Another strategy would be to ensure that accurate COVID-19 risk information is communicated by people and institutions with a broad range of political credentials. If risk communicators are seen to be biased—perhaps because they’re thought to represent only a single group—distrust is likely to follow. Information communicated by diverse voices should find the widest reach.

It’s particularly important that these principles are applied by the fact-checking organizations that are tasked with countering the deluge of misinformation about COVID-19. The vital role these organizations play in decontaminating our risk communication environment can succeed only to the extent that they’re trusted. And trust will only be invested by politically diverse publics if fact-checkers maintain a reputation of political neutrality. Inevitably, all effective fact-checkers must sometimes find fault with claims congenial to the worldviews of certain groups. Thanks to the dynamics of motivated reasoning, this risks losing the trust of those groups. It is essential, therefore, that fact-checkers are seen to be even-handed in their scrutiny of claims made across the political spectrum.

It’s also worth exploring how alternative models of fact-checking might raise trust without compromising accuracy. For example, one study had three expert fact-checkers and a politically diverse sample of laypeople rate the accuracy of different information about COVID-19.1 The results showed that panels of 10 politically balanced laypeople tended to agree with the expert fact-checkers about as much as the experts agreed with one another, suggesting that the fact-checking of non-expert panels aligns well with expert opinion.

Even more striking was that while experts conducted deep dives into the original articles presenting this information, non-expert panels made their ratings based only on article headlines and lede sentences. This suggests that by leveraging the “wisdom of crowds,” relatively accurate fact-checking could be achieved quickly and cheaply by members of the public, simultaneously sidestepping the problem of perceived bias and speeding up fact-checking such that it matches the scale of the COVID-19 infodemic that besets us.
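As a toy illustration of this “wisdom of crowds” aggregation idea, averaging a balanced panel’s accuracy ratings per headline and comparing the verdicts with an expert consensus might look like the sketch below. All ratings, the 1–7 scale, and the threshold are invented for illustration and are not taken from the study.

```python
from statistics import mean

# Hypothetical ratings on a 1-7 accuracy scale:
# rows = 10 panelists, columns = 3 headlines.
panel_ratings = [
    [6, 2, 5], [7, 1, 4], [5, 2, 6], [6, 3, 5], [7, 2, 4],
    [6, 1, 5], [5, 2, 6], [7, 2, 5], [6, 3, 4], [6, 2, 5],
]
# Invented expert-consensus means for the same three headlines.
expert_means = [6.3, 1.7, 5.0]

def panel_verdicts(ratings, threshold=4.0):
    """Average each headline's ratings across the panel; at or above the
    threshold counts as a verdict of 'accurate'."""
    means = [mean(col) for col in zip(*ratings)]
    return [(m, m >= threshold) for m in means]

for (panel_mean, accurate), expert in zip(panel_verdicts(panel_ratings), expert_means):
    agree = accurate == (expert >= 4.0)
    print(f"panel mean={panel_mean:.1f}  expert mean={expert:.1f}  verdicts agree={agree}")
```

The design choice worth noting is that aggregation happens across a politically balanced panel, so individual members’ motivated reasoning can partially cancel out in the average, which is the mechanism the “wisdom of crowds” argument relies on.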

However effective our fact-checking systems, some misinformation will inevitably slip through the net. This is where social media companies have a responsibility to encourage their users to exercise discernment when sharing information about COVID-19 on their platforms. A recent study found that getting people to briefly reflect on the accuracy of an unrelated headline substantially increased the ratio of true vs. false headlines about COVID-19 they were then willing to share.12 This was before they were told which were true and which were false. The researchers speculate that social media users often choose what to share based on goals other than accuracy, such as obtaining “likes” and other positive reinforcement. Users nevertheless do care about accuracy, which is why when nudged to reflect on it, they are less likely to share content they suspect might be untrue.

Incorporating a feature on these platforms that reminds users to consider the likely (in)accuracy of the content they see should help mitigate the sharing of false information, particularly where this might otherwise be motivated by a desire to gain positive reinforcement from others who share our political outlook.

Concluding remarks

It’s not possible to eliminate bias from human reasoning, nor should we wish to leave politics out of risk management. Judgments on how much we should care about a given issue are necessarily shaped by our personal values, which must also be central to decisions about where and to what extent we’re willing to make sacrifices in managing certain risks. Issues can arise, however, when we are forced to reason in a risk communication environment in which misinformation abounds. Here, motivated reasoning can unconsciously push us towards conclusions on matters of fact that diverge sharply from what the science tells us. When our understanding of relevant facts is skewed, we’re less able to act in ways that best protect the things we care about, whether that’s community wellbeing or individual freedom.

It’s critical, therefore, that political leaders and public health organizations take account of the science of risk perception when developing communication strategies, particularly when dealing with a global pandemic. If carefully designed and evidence-based, these could short-circuit the distorting processes of politically motivated reasoning before they gain traction. There’s also more that social media companies could do to help combat the current COVID-19 infodemic, from crowdsourcing fact-checking to nudging their users to pay greater attention to the (in)accuracy of the content they’re inclined to share.

These efforts to better communicate COVID-19 risk could help de-escalate growing political polarization in people’s beliefs about the virus, bringing us closer to a common understanding that will enable a more coordinated, and ultimately more effective, response.
