A New SPIN on Misinformation

2 min read

May 23, 2024

We all stretch the truth from time to time.1 The real problem is when our lies spread to thousands of people, assisted by recent technological advancements such as social media or artificial intelligence.2, 3 This has a real impact on the decisions people make—such as who to vote for or whether to get vaccinated.

You’re probably already familiar with this phenomenon—it’s called misinformation: the dissemination of false or misleading information.4, 5, 6

Our latest research at The Decision Lab identified and organized the different types of misinformation into a taxonomy called Sorting Potentially Inaccurate Narratives (SPIN). We hope this tool can help both individuals and organizations combat misinformation to make the best decisions possible. 

[Image: SPIN misinformation diagram]

What is a Misinformation Taxonomy?

A taxonomy is a system of classification built around explicit criteria. In particular, misinformation taxonomies attempt to achieve two things:

  1. List all the relevant types of misinformation
  2. Organize those types according to meaningful criteria

The ultimate goal behind these taxonomies is to guide future interventions that teach people how to identify and, in turn, combat misinformation.


Most misinformation taxonomies in the past have been tailored to specific fields—such as education or politics—to create solutions that directly address problems within that context.7, 8, 9, 10, 11 In contrast, our goal was for our taxonomy to capture as many different types of misinformation as possible. This way, we could determine which interventions work in some situations but not others.

Building a taxonomy

After reviewing the literature, we gathered an initial list of misinformation types from two of the most comprehensive taxonomies we could find, by Kapantai et al.7 and Kozyreva et al.8 Although both papers originally cut out any terms irrelevant to their respective contexts, we decided to add these terms back in to make our list as exhaustive as possible. Afterward, we turned to a GPT-based language model to fill in any remaining gaps, resulting in 63 terms in total.

We then removed 12 terms based on the following three rules, along with examples of how each was applied. (A sketch of how these decisions might be encoded appears after the list.)

  1. Remove terms encompassed by other, broader terms. We took out “highly partisan news site” since it was already implied by “biased/one-sided.” 
  2. Remove terms that don’t describe content. We got rid of “bullying” from our list since it doesn’t indicate what’s being shared, just how it’s being shared.
  3. Combine similar terms into one, keeping the more popular title. We condensed “cherrypicking” and “bias in fact selection” to be just “cherrypicking” since this is what it’s most commonly called.
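
To make the pruning concrete, here’s a minimal sketch of how decisions like these could be recorded and applied programmatically. The rules themselves required human judgment, so the code simply replays recorded decisions; the term lists below are the examples from this article, and the helper is purely illustrative, not part of our actual workflow.

```python
# Recorded pruning decisions, replayed over the initial term list.
# These mappings capture human judgments; the code just applies them.

# Rules 1 and 2: terms removed outright, with the reason for each removal
removed = {
    "highly partisan news site": "encompassed by 'biased/one-sided'",   # Rule 1
    "bullying": "describes how content is shared, not what is shared",  # Rule 2
}

# Rule 3: similar terms merged under the more popular surviving title
merged = {
    "cherrypicking": ["bias in fact selection"],
}

def prune(terms: list[str]) -> list[str]:
    """Drop removed terms and any variants absorbed by a merge."""
    absorbed = {variant for variants in merged.values() for variant in variants}
    return [t for t in terms if t not in removed and t not in absorbed]

initial = ["cherrypicking", "bias in fact selection", "bullying",
           "highly partisan news site", "clickbait"]
print(prune(initial))  # ['cherrypicking', 'clickbait']
```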

This left us with a final list of 51 terms.

Now that we had our list, it was time to organize the terms. From the start, we wanted to create categories that would be most helpful and relevant for informing future interventions. For example, when designing an intervention that encourages participants to verify whether a fact is correct, it’s important to first know how easily “verifiable” each type of misinformation is.

What did we come up with?

We came up with three broad categories—psychological, content, and source—each with three subdimensions of its own.

[Image: SPIN misinformation tree diagram]

Then, we sorted each of our 51 types of misinformation into these subdimensions. Here’s what the first five terms on our list look like. (Be sure to check out the full taxonomy here!)

To make sense of all of these categories and subdimensions, let’s take a closer look at one row in the chart: clickbait, or misleading headlines designed to encourage users to click on a link.

Psychological

This category addresses an individual’s motivation for sharing the information—including their thoughts, emotions, and attitudes. The three psychological dimensions we decided upon were intentionality, profit, and ideological.

  1. Intentionality: Is the person purposely lying? Terms could be categorized as either intentional or unintentional. Since clickbait is deliberately exaggerated to try to get you to click on it, we categorized it as intentional. 
  2. Profit: Is the person sharing this information to make money? Terms could be categorized as either yes or no. News websites use clickbait to drive traffic and, in turn, make a profit, leading us to categorize this term as yes.
  3. Ideological: Is the person sharing this information motivated by political views or personal values? Terms could be categorized as either yes or no. Although clickbait might use controversial opinions to catch your attention, the underlying motivation is not always politically driven, so we categorized this term as no.

Content

This category addresses the characteristics of the content being shared. The three content-based dimensions we decided upon were format, manipulation, and facticity.

  1. Format: How much information is being provided? Terms could be categorized as either rich or shallow. Since clickbait only includes the title or thumbnail of an article or video, we categorized this term as shallow.
  2. Manipulation: How is the person manipulating the content? Terms could be subcategorized as created, distorted, or reshared. Clickbait is neither created from scratch nor simply “reshared.” Instead, entities typically twist a story, so we categorized this term as distorted.  
  3. Facticity: How true do the statements tend to be? Terms could be categorized as false, mostly true, or mixed. Since clickbait combines true aspects of a story with exaggerated or even fabricated details, we categorized this term as mixed.

Source

This category describes the players at hand, including both those sharing the information and those receiving it. The three source-based dimensions we settled on were audience, verifiability, and agent.

  1. Audience: How large does the audience tend to be? Terms could be categorized as either one-to-many or one-to-one. Articles and videos are shared on the internet with thousands of potential viewers, so we categorized clickbait as one-to-many.
  2. Verifiability: Can someone quickly and easily check how truthful the statement is? Terms could be categorized as either yes or no. After clicking on the article or video, it’s usually pretty clear that it’s loosely based on reality. With this in mind, we categorized clickbait as yes.
  3. Agent: Who is the information coming from? Terms could be categorized as either institutions or individuals. Since clickbait is usually published by an organization, we identified the agent as institutions.

Any time a dimension wasn’t applicable to a particular term, we answered “NA” or “not applicable.”
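
For readers who want to work with the taxonomy programmatically, here’s a minimal sketch of how a single term’s nine dimensions could be encoded, using clickbait exactly as classified above. All names here are hypothetical choices of ours; the taxonomy itself doesn’t prescribe any data format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical encoding of the SPIN dimensions described above.
# None stands in for "NA" wherever a dimension doesn't apply.

class Intentionality(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Format(Enum):
    RICH = "rich"
    SHALLOW = "shallow"

class Manipulation(Enum):
    CREATED = "created"
    DISTORTED = "distorted"
    RESHARED = "reshared"

class Facticity(Enum):
    FALSE = "false"
    MOSTLY_TRUE = "mostly true"
    MIXED = "mixed"

@dataclass
class SpinTerm:
    name: str
    # Psychological
    intentionality: Optional[Intentionality]
    profit: Optional[bool]
    ideological: Optional[bool]
    # Content
    format: Optional[Format]
    manipulation: Optional[Manipulation]
    facticity: Optional[Facticity]
    # Source
    one_to_many: Optional[bool]    # audience: True = one-to-many, False = one-to-one
    verifiable: Optional[bool]
    institutional: Optional[bool]  # agent: True = institution, False = individual

# Clickbait, classified exactly as in the walkthrough above
clickbait = SpinTerm(
    name="clickbait",
    intentionality=Intentionality.INTENTIONAL,
    profit=True,
    ideological=False,
    format=Format.SHALLOW,
    manipulation=Manipulation.DISTORTED,
    facticity=Facticity.MIXED,
    one_to_many=True,
    verifiable=True,
    institutional=True,
)
```

A structure like this makes it easy to filter the full list of 51 terms; for instance, pulling out every intentional, profit-driven term when designing an intervention aimed at financially motivated misinformation.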

So what?

What went well

We achieved our original objective of creating an exhaustive list of as many types of false or misleading information as possible. This overcame the shortcomings of previous taxonomies by including a wide range of terms spanning several sectors. For instance, “discredited research” might be a term more relevant to academia, while “fake news” might carry more political implications. We also accounted for traditional sources of misinformation, such as “urban legends,” while incorporating novel sources born of technological advancements, such as “deepfakes.”

What to work on

Of course, this is not to say that our taxonomy is perfect. The unfortunate reality is that it was impossible for us to include every single type of misinformation due to the sheer volume of words out there. It’s also worth mentioning that each term we included has a wide variety of definitions and examples. This made it difficult to categorize broader terms—like “bogus,” “false information,” or “manipulation”—where various instances may even contradict each other. However, this problem is not specific to our research but is inherent to any misinformation taxonomy, making constant updating and refining even more important.

Where to go from here

As we previously discussed, the ultimate purpose of this taxonomy is to inform the design of future interventions that help participants identify types of misinformation. This could include pre-existing interventions—such as the Bad News game, which teaches participants how to spot fake news12—as well as new interventions designed to directly combat specific types of misinformation.

Until then, we hope to make this taxonomy as accessible as possible so that anyone can use it as a tool for identifying misinformation. This way, we can all start to make more informed decisions in our lives.

Are you passionate about combating misinformation through behavioral science? The Decision Lab is eager to collaborate with researchers and practitioners to further develop and refine interventions using our SPIN taxonomy. Get in touch today to help us in our mission to create a more informed world!

References

  1. Sai, L., Shang, S., Tay, C., Liu, X., Sheng, T., Fu, G., ... & Lee, K. (2021). Theory of mind, executive function, and lying in children: a meta‐analysis. Developmental Science, 24(5), e13096.
  2. Kaiser, J., & Rauchfleisch, A. (2018). Unite the right? How YouTube’s recommendation algorithm connects the US far-right. D&S Media Manipulation.
  3. Tufekci, Z. (2018, March 10). YouTube, the great radicalizer. The New York Times. Van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), 1600008.
  4. Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological science in the public interest, 13(3), 106-131.
  5. Shao, C., Ciampaglia, G. L., Varol, O., Yang, K. C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature communications, 9(1), 1-9.
  6. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. science, 359(6380), 1146-1151.
  7. Kapantai, E., Christopoulou, A., Berberidis, C., & Peristeras, V. (2021). A systematic literature review on disinformation: Toward a unified taxonomical framework. New media & society, 23(5), 1301-1326.
  8. Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103-156.
  9. Molina, M. D., Sundar, S. S., Le, T., & Lee, D. (2019). “Fake news” is not simply false information: A concept explication and taxonomy of online content. American Behavioral Scientist. Advance online publication. https://doi.org/10.1177/0002764219878224
  10. Rojecki, A., & Meraz, S. (2016). Rumors and factitious informational blends: The role of the web in speculative politics. New Media & Society, 18(1), 25-43.
  11. Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking (Vol. 27, pp. 1-107). Strasbourg: Council of Europe.
  12. Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1-10. https://doi.org/10.1057/s41599-019-0279-9

About the Authors

The Decision Lab

The Decision Lab is a Canadian think-tank dedicated to democratizing behavioral science through research and analysis. We apply behavioral science to create social good in the public and private sectors.
