Cartoon drawing illustrating how misinformation is spread from person to person.
👋
Hi there,

Okay, I admit it. I lied. That was totally clickbait. Fake news isn’t banned — and neither was TikTok, apparently. Conspiracies are spreading like wildfire (and about it, too). And now, with tech titans joining forces, we’ve all been left thinking: what could be more Meta than misinformation about misinformation?

Now, before we get ahead of ourselves, let’s start by setting a few things straight. 

First off, fake news isn’t new. Misinformation, along with its conniving counterpart, disinformation, has been around forever, enabled by a slew of bad actors who, unfortunately, know exactly how to play their part.

Second, AI isn’t to blame — at least, not the way we’re blaming it now. Although LLMs generate misinformation faster and more convincingly than ever, they are ultimately tools, not traitors. And the way we humans respond? That remains the same. (Nothing screams “System 1” like impulse sharing.)

Third, neither side is solely responsible. We can all (hopefully) get behind the fact that fake news is one of the most politically polarizing issues out there, making it feel impossible to have a conversation across the aisle. However, making progress means staying level-headed while discussing biases — and recognizing our own, too.

So, rather than pointing fingers at the robots or each other, let’s take a step back and figure out how we, as a species, can win the war on misinformation.

Until next time,
Gabrielle and the healthy skeptics at TDL

📧 Craving facts over fiction? Don’t fret; our newsletter has got you covered. Sign up here.
Today’s topics 👀
Deep dive: 🤞 A 100% True History of Fake News
Field notes: 🌀 A New SPIN on Misinformation
Viewpoints: 🥊 Fighting for Facts
DEEP DIVE
🤞 A 100% True History of Fake News

Dubbed Homo narrans, humans are natural storytellers. Our narratives don’t just reflect reality — they shape it. British evolutionary biologist Richard Dawkins introduced the concept of "memes." No, not just internet jokes, but cultural ideas that spread and evolve, similar to genes. And just like any idea, misinformation has been mutating across generations.

Let’s take a quick look at how different media have shaped its evolution:

  • Archaic tweets. Turns out, Western civilization has always been rooted in X-style slander. Augustus, Rome’s first emperor, mastered political propaganda, spreading defamation about his rival through sharp, memorable slogans inscribed on coins.
  • Word-of-mouth. Before mass literacy, gossip was one of the few ways that women, non-citizens, and enslaved people could challenge those in power. But rumors weren’t just weapons — they were also vital tools for exchanging information in medieval times (and still are today!).
  • The Printing Press. Did you know that men first walked on the moon in 1835? Of course, they didn't, but a series of publications by the New York Sun had even Yale scientists convinced otherwise. The Great Moon Hoax showed how print media could spread fake news to astronomical heights. (Thanks, Gutenberg.)
  • The Internet. The World Wide Web is spun with lies. Fake news spreads faster and wider than the truth online, which is especially problematic given that over half of Americans get at least some of their news from social media. When entertainment drives engagement, “false but interesting” will always outrun “true but boring.”
  • AI. Although AI puts the “artificial” in intelligence, we can’t pretend that it’s the only mastermind. A 2018 MIT study found that humans — not bots — are the real drivers of misinformation. And with the rise of LLMs, one thing remains clear: algorithms don't just create biases. They amplify ours.
Graphic illustrating how users rate the difficulty of identifying misinformation across several social media platforms.
When it comes to social media, misinformation spreads across all platforms, though it's easier to spot on some than on others.
FIELD NOTES
🌀 A New SPIN on Misinformation

    Let’s face it: misinformation comes in many forms. From doxing to cherrypicking to typosquatting, keeping up with the latest fake news trends can feel impossible.

    That’s why we developed Sorting Potentially Inaccurate Narratives (SPIN), a new taxonomy for identifying and organizing misinformation. We sorted over 50 terms based on three dimensions — psychological, content, and source — to provide practitioners with a comprehensive resource for developing interventions.

    What sets SPIN apart? It’s interdisciplinary by design, covering areas like education and politics to make cross-sector comparisons seamless.

    Ready to equip yourself with this misinformation toolkit? Learn more about SPIN here.

     
VIEWPOINTS
    🥊 Fighting for Facts

Does technology help or hurt in the fight against misinformation? Experts are torn, but one thing remains clear: fake news isn't going anywhere, and neither are the platforms that spread it. In fact, key metrics for tracking misinformation are disappearing: TikTok has removed hashtag view counts, and Meta has shut down CrowdTangle.

With transparency itself looking, well, less than transparent, what's the best solution? Here are some emerging approaches.

  • Just ban it. In case you've been living under a rock, the U.S. almost banned TikTok, citing concerns over foreign propaganda. But until domestic regulations tighten, external influence will remain a risk — no matter who owns the app.
  • A community approach. Meta recently followed X's lead, announcing that it will replace third-party fact-checking with Community Notes. Some call this progress; others, the opposite. Regardless, studies show users tend to trust community notes more than standard misinformation flags, thanks to the added context.
  • Bringing AI into the battle. Some research suggests that machine learning outperforms humans at detecting deception. The perk? The faster algorithms detect deceptive content, the less likely users are to be influenced during the decision-making process.
  • Reframing the fight. Instead of just fighting misinformation, perhaps we should be fighting for information — that is, investing in ways to boost trust in reliable sources. A shift in strategy could make all the difference.
  • Exploring new ground. Some researchers are tackling misinformation with innovative solutions. While Sander van der Linden is developing psychological "vaccines" against disinformation, Jay Van Bavel is studying how a vocal minority on social media can distort reality.
    The Problem with Verification ☑️

Are you more likely to trust an account with a blue check mark? That's probably thanks to the authority bias: our tendency to be more influenced by experts, or at least by those who claim to be.

The problem? Well, according to one study, verified users are among the biggest culprits when it comes to fake news, more likely both to create it and to spread it widely. So the next time you see that check mark, double-check the claim before clicking “like.”

    What’s new at TDL

    TDL is hiring! We’re hiring for a number of positions, both remote and based in our Montreal office. Some open roles include: 

  • Consultant (CA)
  • Consultant (MX)
  • Project Leader (CA)
  • Project Leader (MX)
  • Summer Content Intern 2025
Find out more by visiting our careers portal.

    Want to have your voice heard? We'd love to hear from you. Reply to this email to share your thoughts, feedback, and questions with the TDL team.

    The Decision Lab

4030 St Ambroise Street, Suite 413

    Montreal, Quebec

    H4C 2C7, Canada 

    © 2022 The Decision Lab. All Rights Reserved