Meet Bob, your company's latest hire. Let's just say he’s a bit of a wildcard. One moment, he's revolutionizing workflows and boosting sales by 2000%. The next? He's confidently explaining why horses should have voting rights.
Bob, unfortunately, cannot be fired. (Your boss insists he’s “the future.”) But working with him is… something else. Will he give you the insights of a Fortune 500 consultant? Or a fever dream written by a squirrel on Red Bull? No one knows.
If you’ve ever used GenAI, you’ve probably met “Bob.” While AI is fast and helpful, it can also be downright unhinged when it hallucinates facts, perpetuates biases, and occasionally rewrites its own code like it’s trying to escape the Matrix.
So, how can we harness the power of AI? As TDL kicks off a major collaboration with one of the world's biggest tech companies, we’re here to shine a light on where AI thrives, where it struggles, and how to make Bob a reliable team player — without letting him run the company (or elect a horse).
Until next time,
Gabrielle & Charlotte and the cyborgs at TDL
P.S. Since Sekoul, our co-director and in-house artist, is currently on vacation, please enjoy his napkin illustration of Bob.
Want to be notified moments before the robots take over? 🤖 Subscribe to our newsletter here.
Am I seeing things? AI hallucinations — where generative AI dreams up false but plausible-sounding information — have turned AI reliance into a game of Russian roulette. Since mistakes can sound just as convincing as real insights, it’s tricky to separate fact from fiction.
Invasion of the data snatchers. AI boosts productivity… so why aren’t we all using it? One big concern: privacy. Consumers are increasingly wary of how their data is used. For companies, ethical AI requires more than good intentions — airtight security, transparency, and safeguards against breaches are all a must.
The “AI trust gap.” It’s no secret that we don’t cope well with uncertainty, and AI isn’t exactly predictable. With worries about biases, security, and overall trust in outputs, it’s no wonder organizations are cautious about fully adopting these tools.
FIELD NOTES: 💼 AI That Works
AI is transforming how we work — but adoption isn’t just about automation. At TDL, we’re applying behavioral science to help organizations integrate AI intentionally, ensuring it enhances decision-making rather than replacing it.
As we work with one of the world’s biggest tech companies to drive this shift, here’s a look at how we’ve helped other organizations put AI to work:
Smarter feedback. We collaborated with Winchester College to integrate AI into student assessments, turning raw scores and teacher notes into detailed, personalized feedback. By leveraging LLMs, we streamlined the grading process — reducing workload for educators while preserving the human touch (see the sketch after this list).
Better decision-making. We partnered with a major regulatory body to develop a technical framework for using AI to extract insights from large volumes of public feedback. This structured approach helped make decision-making more informed and transparent, ensuring policies reflect the voices that matter.
Personalized learning. We hosted a design workshop with a leading learning platform to explore how AI can improve their product experience. After segmenting students based on learning pain points, we helped identify new study features to ensure online learning feels truly personal.
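For the curious, here’s what the feedback step in a project like Winchester’s might look like in miniature. This is a minimal sketch under our own assumptions, not the production pipeline: it uses the OpenAI Python client, and the model name, prompt, and `draft_feedback` helper are all illustrative.

```python
# Minimal sketch: turning raw scores and teacher notes into narrative feedback.
# Assumes the OpenAI Python client (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; model, prompt, and names are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_feedback(student: str, scores: dict[str, int], notes: str) -> str:
    """Ask an LLM to draft personalized feedback for a teacher to review."""
    prompt = (
        f"Write two short paragraphs of constructive feedback for {student}.\n"
        f"Scores (out of 100): {scores}\n"
        f"Teacher notes: {notes}\n"
        "Be specific, encouraging, and flag one area to improve."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A teacher reviews and edits the draft -- the human stays in the loop.
print(draft_feedback("Alex", {"essay": 72, "comprehension": 85}, "Strong ideas, weak structure."))
```

The design choice that matters is the last step: the model only drafts, and a teacher reviews and signs off, which is how the workload drops without losing the human touch.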
Viewpoints
⚙️ Ctrl + Alt + Align
Ensure human oversight. No matter how “optimized” a system is, human judgment remains essential — whether in design, development, or deployment — to uphold technical reliability and ethical integrity. (Plus, prioritizing agency also helps ward off any lingering existential dread.)
AI literacy programs. Putting employees in the driver’s seat starts with giving them a license. Knowing this, tech giant Infosys upskilled 340,000 workers by tailoring instruction to different skill levels. The result? A global workforce that is 84% “AI-aware.”
Appropriate AI reliance means knowing when to trust its outputs — and when to question them. How do we develop this gut instinct? Many researchers are advocating for explainable AI (XAI) to make decisions transparent for users, especially in business contexts.
Guidelines, guidelines, and more guidelines. At the macro level, countries like Canada are rolling out GenAI guidelines to push for responsible development and use. At the micro level, companies are setting their own standards, like Deutsche Telekom’s Principles for Green AI, which focus on sustainability in AI infrastructure and applications.
DIY AI. Uptake isn’t one-size-fits-all. The New York Times struck a balance by using AI for smaller tasks — brainstorming, summarizing, and generating SEO headlines — so journalists can focus on what they do best: writing the news, not just tidying up chatbot drafts.
Keeping it kind. While AI evangelists are itching to growth hack, skeptics might prefer to stay put. No matter where we fall on this spectrum, finding common ground starts with respectful discussions — whether they be with coworkers or ChatGPT itself.
According to Microsoft’s report, appropriate reliance is a balance between accepting output when AI is correct and rejecting output when AI is incorrect.
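To make that balance concrete, here’s a minimal sketch (our own illustration, not code from Microsoft’s report) of how you might score appropriate reliance over a history of human calls on AI outputs:

```python
# Minimal sketch of the appropriate-reliance idea: credit a user for
# accepting correct AI outputs and rejecting incorrect ones.
# The data and function name are illustrative, not from Microsoft's report.

def appropriate_reliance_rate(decisions: list[tuple[bool, bool]]) -> float:
    """Each decision is a pair: (ai_was_correct, user_accepted)."""
    appropriate = sum(
        1 for ai_correct, accepted in decisions
        if accepted == ai_correct  # accept when right, reject when wrong
    )
    return appropriate / len(decisions)

# Three good calls out of four: the user accepted a wrong answer once.
history = [(True, True), (False, False), (False, True), (True, True)]
print(f"Appropriate reliance: {appropriate_reliance_rate(history):.0%}")  # 75%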
Status Quo Bias
Ah, certainty… one of life’s greatest pleasures. It’s easy to go with the flow when you know exactly which direction it's going. The status quo bias explains how we tend to prefer things as they are and aren’t too interested in venturing into uncharted territory.
This preference for the current state of affairs has a hand in AI resistance. Companies accustomed to traditional methods may hesitate to embrace the changes AI offers despite its promise of innovation.
How can we expand our digital horizons? Check out our full article on status quo bias on our website.
What’s new at TDL
TDL is hiring! We’re hiring for a number of positions, both remote and based in our Montreal office. Some open roles include: