[Image: Humans and robots circle a clock hand-in-hand.]
👋
Hi there,

Have you ever used a Large Language Model – like ChatGPT – at work?

With the exponential rise of everyday-use AI chatbots, industries are torn between their usefulness and their growing pains. While the media has hailed ‘prompt engineer’ as the hot new job in tech, a rising number of companies have banned or restricted employee use of LLMs, including Apple, JPMorgan, Bank of America, Citigroup, and Samsung.

Their hesitation isn’t unfounded. As with any shiny new technology, we tend to overlook the biased nature of LLMs, overestimate their accuracy, and forget that they aren’t immune to sensitive data breaches.

But that doesn’t mean LLMs can’t serve us – in ways we’re only beginning to imagine. Some estimates suggest that LLMs could absorb over 40% of our working hours, freeing us to focus on higher-level tasks.

Before we get there, we have a steep learning curve ahead of us. So today, we’re exploring the behavioral side of using LLMs at work: how can we combine the strengths of human and AI decision-making? And what does that mean for behavioral science?

Until next time,

Sarah and the AI enthusiasts @ TDL
📧 Wanna prepare for our future robot overlords? Sign up for the newsletter here.
Today’s topics 👀
🦾 DEEP DIVE: Building the Human-AI Alliance at Work
🥸 FIELD NOTES: Biased or Bias-Buster?
🤖 VIEWPOINTS: The Chatbot-Era of Behavioral Science
DEEP DIVE
🦾 Building the Human-AI Alliance at Work
  • Asking the right questions. Expert knowledge isn’t just about what we know. By asking the right questions – i.e., becoming accomplished prompt engineers – we can start to build new, collaborative research methods between humans and generative AI.
  • Adopting the Centaur Model. Human expertise isn’t in competition with generative AI – in fact, they’re stronger together. This Oxford-Harvard research partnership proposes the human-algorithm Centaur Model, a combination projected to decrease patient readmission rates in hospitals by over 26%.
  • Overcoming decision paralysis. Have a pile of documents to sift through? Too many tasks? Hitting decision fatigue? Plenty of users now lean on language models to triage to-do lists, optimize schedules, or clear email backlogs (see the sketch after this list).
  • Productivity boosts. An MIT study found that introducing generative AI increased the quality of work, productivity, job satisfaction, and self-efficacy. The trick? AI shifted workers’ time away from rote drafting and toward idea generation and editing.
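Curious what that to-do triage actually looks like? Below is a minimal sketch in Python. It assumes the OpenAI Python SDK’s v1 chat-completions client, and the model name and prompt wording are placeholders – any chat-capable LLM API would slot in the same way:

```python
# Minimal sketch: asking an LLM to triage a to-do list.
# Assumes the OpenAI Python SDK (v1-style client); the model name
# below is a placeholder – use whichever model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_todos(todos: list[str]) -> str:
    """Ask the model to rank tasks by urgency and suggest a first step."""
    task_list = "\n".join(f"- {t}" for t in todos)
    prompt = (
        "Here is my to-do list:\n"
        f"{task_list}\n\n"
        "Rank these tasks from most to least urgent, and suggest "
        "a concrete first step for the most urgent one."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(triage_todos([
    "Reply to 40 unread emails",
    "Draft the Q3 report",
    "Book flights for the conference",
]))
```

Note the division of labor: the model only ranks and suggests, while the human still decides – the Centaur Model in miniature.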
FIELD NOTES
🥸 Biased or Bias-Buster?

AI algorithms are on the rise in the workplace – but due to cases of misuse, human trust in them is still trailing behind.

While it’s true that AI algorithms can perpetuate human stereotypes and historical bias, they can also be tools to overcome them. AI can help anonymize information and address systemic bias – plus, it’s a lot easier to call out an algorithm for its bias than a human.

To learn more, check out AI Algorithms at Work: How to use AI to Help Overcome Historical Bias.

[Image: A hand typing at a desktop computer.]
VIEWPOINTS
🤖 The Chatbot-Era of Behavioral Science
  • Same, but different. ChatGPT is sensitive to framing and reference points – but it doesn’t care all that much about sunk costs or the endowment effect. A Canadian study found that while ChatGPT is biased, it’s not biased in the same way humans are. Its biggest flaw? That’s right: its well-documented overconfidence.
  • Hidden bias. A team of behavioral researchers investigated ChatGPT’s hidden biases by asking it poverty-related questions. With the right prompt engineering, AI chatbots might end up less biased than the average American – but the team also found that the chatbot was easily swayed into perpetuating harmful narratives about poverty.
  • Reframing AI survey bots. It can be hard to distinguish between human and AI survey responses. While that’s a problem for researchers conducting surveys today, it might also be the future of psychological research.
[Chart: Percentage of U.S. job postings requesting AI skills in 2022 – the Information sector leads at 5%. From Stanford University’s ‘2023 State of AI in 14 Charts’.]
The Google Effect

First coined in 2011, the Google Effect describes our tendency to forget information that we know we can easily look up on a search engine.

It works much like the GPS effect: as turn-by-turn navigation grew popular, researchers observed a dip in people’s spatial memory when they navigated on their own.

The coming years might even bring a ChatGPT effect: a decline in the mental skills we now delegate to LLMs.

Opportunities in Behavioral Science

TDL is hiring! We have a number of open positions, both remote and based in our Montreal office. Some open roles include:

  • Consultant
  • Research Analyst
  • Senior UX Designer

Find out more by visiting our careers portal.

Want to have your voice heard? We'd love to hear from you. Reply to this email to share your thoughts, feedback, and questions with the TDL team.
THE DECISION LAB

The Decision Lab

4030 St Ambroise Street, Suite 413

Montreal, Quebec

H4C 2C7, Canada 

© 2022 The Decision Lab. All Rights Reserved