COVID-19 and the Science of Risk Perception
Throughout this coronavirus pandemic, mainstream media, national governments, and official health organizations have been broadly united in their recognition of COVID-19 as a serious threat to public health. This apparent consensus, however, belies the level of disagreement within national populations.
From conspiracy theorists who reject the very existence of the virus on one end,6 to people suffering from the debilitating effects of COVID-19-related health anxiety on the other,7 people’s perceptions of the risk posed by COVID-19 vary enormously.
As governments try to balance controlling the spread of the virus against keeping their economies moving, rates of infection are steadily rising. Controlling infection while keeping riskier sectors of the economy open (such as hospitality) depends in large part on public compliance with behavioral measures designed to control the virus.
While behavioral science has uncovered many determinants of behavior beyond beliefs, attitudes, and intentions, it remains true that people who perceive lower risk from a hazard devote less energy to mitigating that risk. This has been borne out in recent research showing that people engage more in protective behaviors, such as handwashing and physical distancing, as their perceptions of COVID-19 health risks go up.4
This highlights why it’s important that risk perceptions towards COVID-19 aren’t radically out of step with the best available science. Highly skewed perceptions of COVID-19 risk within a significant portion of a population could undermine efforts to keep infection rates under control.
What factors shape COVID-19 risk perceptions?
Policymakers rely on formal criteria for measuring risk. Where infectious disease is concerned, this typically involves multiplying the probability of infection by some measure of its negative effects on health. The rest of us form judgments about risk through a messy combination of cognitive, emotional, social, and cultural processes. These can yield outcomes very different to those produced by formal assessments.
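The formal calculation described above can be sketched in a few lines. This is a minimal illustration with entirely made-up numbers, not a real epidemiological model: expected risk is simply the probability of infection multiplied by some numeric severity measure.

```python
def expected_risk(p_infection: float, severity: float) -> float:
    """Formal risk estimate: probability of infection multiplied by
    a numeric measure of its negative health effects."""
    if not 0.0 <= p_infection <= 1.0:
        raise ValueError("p_infection must be a probability in [0, 1]")
    return p_infection * severity

# Illustrative, invented numbers: a 5% infection chance with a
# severity score of 40 carries the same formal risk as a 20%
# chance with a severity score of 10.
print(expected_risk(0.05, 40))
print(expected_risk(0.20, 10))
```

Note how two very different hazard profiles can come out formally equivalent, which is exactly the kind of equivalence that informal, emotion-laden risk judgments tend not to respect.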
Harnessing insight into these informal processes, a recent study sought to pinpoint how much each of a suite of potential factors may be driving disagreement in COVID-19 risk perceptions in the populations of 10 countries. Countries included were as culturally and geographically diverse as the United Kingdom, United States, Sweden, Mexico, and Japan.4
Certain findings from this study are unlikely to raise many eyebrows. People who reported having had the virus, for instance, judged it to pose greater risk than those who didn’t, while those who trusted scientists and medical professionals more also perceived greater risk on average.
Of all factors considered, however, what explained the largest amount of “variance” (or variation) in COVID-19 risk perceptions was how individualistic versus communitarian people were in their political outlooks, measured by their level of agreement with the statement, “the government interferes far too much in our everyday lives.”
This means that if you want to predict someone’s perceptions of COVID-19 risk—which in this study concerned judgments about the chance of becoming infected and the seriousness of disease symptoms—the most important thing to know is their overall attitude towards government intervention in daily life.
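For readers unfamiliar with "variance explained," the idea can be sketched with simulated data. All numbers below are invented for illustration and do not come from the study: we generate a hypothetical individualism score and a risk-perception index, fit a one-predictor linear model, and compute R², the proportion of variation in risk perception the predictor accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey data (all values invented): a 1-7 score of
# agreement with "the government interferes far too much in our
# everyday lives", and a composite COVID-19 risk-perception index.
individualism = rng.uniform(1, 7, size=500)
risk_perception = 5.5 - 0.4 * individualism + rng.normal(0, 0.8, size=500)

# Variance explained (R^2) by a one-predictor linear fit:
slope, intercept = np.polyfit(individualism, risk_perception, 1)
residuals = risk_perception - (slope * individualism + intercept)
r_squared = 1 - residuals.var() / risk_perception.var()
print(f"slope={slope:.2f}, R^2={r_squared:.2f}")
```

A negative slope, as simulated here, corresponds to more individualistic respondents reporting lower perceived risk; the R² value is what a study compares across candidate predictors.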
COVID-19 control measures such as enforced physical distancing, mask-wearing, and strict business regulation are precisely the kinds of “interference” individualists might be expected to disapprove of, and communitarians welcome (provided their communities as a whole—whether local, regional, or national—could benefit).
What isn’t so clear is why individualism/communitarianism should predict beliefs about infectiousness and symptom severity—facts which stand independent of political preferences.
What, then, might explain this finding?
Motivated reasoning and risk perception
A theory known as the cultural cognition of risk offers a possible explanation. Grounded in the psychology of risk perception, it states that people evaluate risk-relevant information in ways that affirm their pre-existing “cultural worldviews,” of which their individualism or communitarianism is a defining feature. This theory has been most profitably applied to explaining differences in people’s perceptions of environmental risks; most notably, climate change.8
Individualists are said by cultural cognition theory to implicitly recognize that climate change risk legitimizes the sort of government “interference” (e.g. taxes on high-carbon vehicles) that they dislike. This in turn makes them less accepting of information that credits this risk. Communitarians, on the other hand, are expected to be sensitive to climate change risk precisely because it invites the sorts of restrictions that can help keep communities safe. Leaving the solution of collective problems to profit-seeking enterprises is anathema to the communitarian worldview.
To test this theorized link between worldviews and risk perceptions, one study had individualists and communitarians rate the validity of a report stressing the risks of climate change under two conditions: 1) when this report went on to recommend geoengineering—an industry-led initiative—as the optimal solution, and 2) when it recommended government-imposed caps on carbon emissions as the best way to reduce the risks.9
As expected, individualists who read the report recommending emission caps were more skeptical of the information on climate change risk than individualists who read the version advocating geoengineering. The opposite pattern of findings was found for communitarians, who were more skeptical of climate change risk information in the geoengineering condition. As predicted by cultural cognition, this suggests that when the actions thought to follow from crediting a particular risk are more hostile to our worldviews, we are less inclined to credit that risk.
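The crossover pattern in that experiment can be made concrete with a toy calculation. The mean ratings below are invented purely to illustrate the shape of the result, not taken from the paper:

```python
# Hypothetical mean validity ratings (1-7 scale; numbers invented)
# for the climate risk report under each condition:
mean_rating = {
    ("individualist", "emission_caps"): 3.1,
    ("individualist", "geoengineering"): 4.6,
    ("communitarian", "emission_caps"): 5.2,
    ("communitarian", "geoengineering"): 4.0,
}

# Cultural cognition predicts each group credits the risk report
# more when the recommended solution fits its worldview:
individualist_gap = (mean_rating[("individualist", "geoengineering")]
                     - mean_rating[("individualist", "emission_caps")])
communitarian_gap = (mean_rating[("communitarian", "emission_caps")]
                     - mean_rating[("communitarian", "geoengineering")])
print(individualist_gap > 0 and communitarian_gap > 0)
```

Both gaps being positive is the signature of the interaction: the same risk information is rated as more valid when its policy implications are worldview-congenial.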
This is not to say that people cynically look ahead to the practical implications of acknowledging a certain risk, review the compatibility of these implications with their political commitments, then consciously adjust their risk perceptions accordingly. Rather, several features of psychologically normal information processing prime our minds for motivated reasoning.10
One such feature is our biased assimilation of information. We disproportionately seek out, notice, and remember information that supports our pre-existing beliefs, attitudes, and values.11 This bias in how we filter information magnifies arguments and evidence that fit with our worldviews, while minimizing those in tension.
Related to this is the phenomenon of motivated skepticism. This is our propensity to implicitly counter-argue information that threatens our worldview, while accepting uncritically information that supports it.13
It’s easy to see how these two features of cognition—biased assimilation and motivated skepticism—could interact with what’s been referred to as the “infodemic”14 of conflicting information on COVID-19 online in such a way that people’s view of the facts ends up aligning with their overriding worldviews.
This is further fuelled by our tendency to find arguments and evidence more credible when we believe that the person or institution delivering them shares our worldviews.3 After all, these are, by virtue of sharing our worldviews, the very people most likely to tell us what we want to hear, reinforcing the effects of biased assimilation and motivated skepticism.
All this is to say that when we encounter contradictory claims about COVID-19, we will automatically attend to, less critically accept, and better remember information that fits with our overarching attitudes. In turn, we are drawn to information that undermines the rationale for responses we find politically unpalatable. This is particularly true where worldview-affirming information comes from a likeminded source, which it typically will.
For the committed individualist, these could be arguments that downplay the health risks of COVID-19 by drawing misleading comparisons to the regular flu, or conspiracy theories which claim that fatality statistics have been artificially inflated to further vested interests.5 Judgments made about COVID-19 risk informed by this information would then legitimize resistance to the very behavioral measures that motivated these judgments in the first place, completing a sequence of events that brings worldview, perception, and behavior into alignment.
Communicating risk across political divides
All this raises the question: what can be done to communicate COVID-19 risk in ways that avoid triggering political sensitivities?
One strategy shown to be effective is framing information in ways that affirm, rather than undermine, important political values.2 Where individualists are concerned, this might mean highlighting ways in which the health impacts of COVID-19 can diminish people’s ability to live life on their own terms. Hospitalization and illness can be greater limiters of self-determination than many of the behavioral measures countries have implemented to limit the spread of the virus. By drawing attention to this, the effects of COVID-19 could be framed as a threat to individualist values without needing to distort the facts.
Another strategy would be to ensure that accurate COVID-19 risk information is communicated by people and institutions with a broad range of political credentials. If risk communicators are seen to be biased—perhaps because they’re thought to represent only a single group—distrust is likely to follow. Information communicated by diverse voices should find the widest reach.
It’s particularly important that these principles are applied by the fact-checking organizations tasked with dismantling the deluge of misinformation about COVID-19. The vital role these organizations play in decontaminating our risk communication environment can succeed only to the extent that they’re trusted. And trust will only be invested by politically diverse publics if fact-checkers maintain a reputation for political neutrality. Inevitably, all effective fact-checkers must sometimes find fault with claims congenial to the worldviews of certain groups. Thanks to the dynamics of motivated reasoning, this risks losing the trust of those groups. It is essential, therefore, that fact-checkers are seen to be even-handed in their scrutiny of claims made across the political spectrum.
It’s also worth exploring how alternative models of fact-checking might raise trust without compromising accuracy. For example, one study had three expert fact-checkers and a politically diverse sample of laypeople rate the accuracy of different information about COVID-19.1 The results showed that panels of 10 politically balanced laypeople tended to agree with the expert fact-checkers about as much as the experts agreed with one another, suggesting that the fact-checking of non-expert panels aligns well with expert opinion.
Even more striking was that while experts conducted deep dives into the original articles presenting this information, non-expert panels made their ratings based only on article headlines and lede sentences. This suggests that by leveraging the “wisdom of crowds,” relatively accurate fact-checking could be achieved quickly and cheaply by members of the public, simultaneously sidestepping the problem of perceived bias and speeding up fact-checking such that it matches the scale of the COVID-19 infodemic that besets us.
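The aggregation step behind this "wisdom of crowds" result is nothing more exotic than averaging: if panelists' errors are independent, they tend to cancel in the mean. A minimal sketch, with invented ratings rather than data from the study:

```python
import statistics

def panel_rating(individual_ratings):
    """Aggregate a panel's accuracy ratings by simple averaging,
    the basic 'wisdom of crowds' mechanism: independent individual
    errors tend to cancel out in the mean."""
    return statistics.mean(individual_ratings)

# Hypothetical 1-7 accuracy ratings of one dubious headline from a
# politically balanced panel of 10 laypeople (numbers invented):
ratings = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
print(panel_rating(ratings))  # 2.8
```

Individual panelists disagree by as much as two scale points, but the panel mean is a far more stable signal than any single rating, which is why even headline-only judgments can track expert assessments.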
However effective our fact-checking systems, some misinformation will inevitably slip through the net. This is where social media companies have a responsibility to encourage their users to exercise discernment when sharing information about COVID-19 on their platforms. A recent study found that getting people to briefly reflect on the accuracy of an unrelated headline substantially increased the ratio of true to false headlines about COVID-19 they were then willing to share.12 Participants were not told which headlines were true and which were false. The researchers speculate that social media users often choose what to share based on goals other than accuracy, such as obtaining “likes” and other positive reinforcement. Users nevertheless do care about accuracy, which is why, when nudged to reflect on it, they are less likely to share content they suspect might be untrue.
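The outcome measure in studies of this kind is often called "sharing discernment": the gap between how readily people share true versus false content. A toy calculation with invented proportions (not figures from the study) shows how a nudge registers on this measure:

```python
def sharing_discernment(true_shared: float, false_shared: float) -> float:
    """Gap between the proportion of true and of false headlines a
    user is willing to share (higher = more discerning sharing)."""
    return true_shared - false_shared

# Hypothetical sharing proportions (numbers invented), before and
# after a brief prompt to reflect on accuracy:
before = sharing_discernment(0.40, 0.33)
after = sharing_discernment(0.42, 0.25)
print(round(before, 2), round(after, 2))
```

Note that discernment can improve mainly through reduced sharing of false headlines, as in the invented numbers here, without people sharing much more true content overall.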
Incorporating a feature on these platforms that reminds users to consider the likely (in)accuracy of the content they see should help mitigate the sharing of false information, particularly where this might otherwise be motivated by a desire to gain positive reinforcement from others who share our political outlook.
Concluding remarks
It’s not possible to eliminate bias from human reasoning, nor should we wish to leave politics out of risk management. Judgments on how much we should care about a given issue are necessarily shaped by our personal values, which must also be central to decisions about where and to what extent we’re willing to make sacrifices in managing certain risks. Issues can arise, however, when we are forced to reason in a risk communication environment in which misinformation abounds. Here, motivated reasoning can unconsciously push us towards conclusions on matters of fact that diverge sharply from what the science tells us. When our understanding of relevant facts is skewed, we’re less able to act in ways that best protect the things we care about, whether that’s community wellbeing or individual freedom.
It’s critical, therefore, that political leaders and public health organizations take account of the science of risk perception when developing communication strategies, particularly when dealing with a global pandemic. If carefully designed and evidence-based, these could short-circuit the distorting processes of politically motivated reasoning before they gain traction. There’s also more that social media companies could do to help combat the current COVID-19 infodemic, from crowdsourcing fact-checking processes to nudging their users to pay greater attention to the (in)accuracy of the content they’re inclined to share.
These efforts to better communicate COVID-19 risk could help deescalate growing political polarization in people’s beliefs about the virus, bringing us closer to a common understanding that will enable a more coordinated, and ultimately more effective, response.
References
- Allen, J. N. L., Arechar, A. A., Pennycook, G., & Rand, D. G. (2020, October 1). Scaling Up Fact-Checking Using the Wisdom of Crowds. https://doi.org/10.31234/osf.io/9qdza
- Barker, D. C. (2005). Values, Frames, and Persuasion in Presidential Nomination Campaigns. Political Behavior, 27, 375–394.
- Chebat, J., & Filiatrault, P. (1987). Credibility, source identification and message acceptance: The case of political persuasion. Political Communication, 4(3), 153–160.
- Dryhurst, S., Schneider, C. R., Kerr, J., Freeman, A. L. J., Recchia, G., van der Bles, A. M., Spiegelhalter, D., & van der Linden, S. (2020). Risk perceptions of COVID-19 around the world. Journal of Risk Research, 1–13.
- Giles, C., & Robinson, O. (2020). Coronavirus: The US has not reduced its Covid-19 death toll to 6% of total. https://www.bbc.co.uk/news/world-us-canada-53999403
- Gruzd, A., & Mai, P. (2020). Conspiracy theorists are falsely claiming that the coronavirus pandemic is an elaborate hoax. https://theconversation.com/conspiracy-theorists-are-falsely-claiming-that-the-coronavirus-pandemic-is-an-elaborate-hoax-135985
- Jungmann, S. M., & Witthöft, M. (2020). Health anxiety, cyberchondria, and coping in the current COVID-19 pandemic: Which factors are related to coronavirus anxiety? Journal of Anxiety Disorders, 73.
- Kahan, D. M. (2012). Cultural Cognition as a Conception of the Cultural Theory of Risk. In S. Roeser, R. Hillerbrand, P. Sandin, & M. Peterson (Eds.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (pp. 725-759). London: Springer.
- Kahan, D. M., Jenkins-Smith, H., Tarantola, T., Silva, C. L., & Braman, D. (2015). Geoengineering and Climate Change Polarization. The ANNALS of the American Academy of Political and Social Science, 658(1), 192–222.
- Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
- Lord, C. G., & Taylor, C. A. (2009). Biased assimilation: Effects of assumptions and expectations on the interpretation of new evidence. Social and Personality Psychology Compass, 3(5), 827–841.
- Pennycook, G., McPhetres, J., Zhang, Y., Lu, J.G., Rand, D.G. (2020). Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention. Psychological Science, 31(7), 770-780.
- Taber, C., & Lodge, M. (2006). Motivated Skepticism in the Evaluation of Political Beliefs. American Journal of Political Science, 50(3), 755-769.
- World Health Organization. (2020). Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation. https://www.who.int/news-room/detail/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation
About the Author
Joshua Bromley
Joshua Bromley is a freelance writer focused on communicating insights from behavioural science and positive psychology. He has a particular interest in how findings from these fields can be applied to business practice and public policy to improve outcomes. He holds a PhD in social psychology from Cardiff University (UK). His doctoral thesis examined the factors which shape people’s perceptions of societal risks, ranging from climate change to terrorism.