Have you ever used a Large Language Model – like ChatGPT – at work?
With the exponential rise of everyday-use AI chatbots, industries are torn between the technology’s usefulness and its growing pains. While the media has hailed ‘prompt engineer’ as the hottest new job in tech, a rising number of companies – including Apple, JPMorgan, Bank of America, Citigroup, and Samsung – have banned or restricted employee use of LLMs.
But that doesn’t mean LLMs can’t serve us – in ways we’re just beginning to imagine. Some estimates suggest that LLMs could take on over 40% of our working hours, freeing us up to focus on higher-level tasks.
Before we get there, we have a steep learning curve ahead of us. So today, we’re exploring the behavioral side of using LLMs at work: how can we combine the strengths of human and AI decision-making? And what does that mean for behavioral science?
Until next time,
Sarah and the AI enthusiasts @ TDL
📧 Wanna prepare for our future robot overlords? Sign up for the newsletter here.
Today’s topics 👀
🦾 DEEP DIVE: Building the Human-AI Alliance at Work
🥸 FIELD NOTES: Biased or Bias-Buster?
🤖 VIEWPOINTS: The Chatbot-Era of Behavioral Science
🦾 Building the Human-AI Alliance at Work
Asking the right questions. Expert knowledge isn’t just about what we know. By asking the right questions – i.e. becoming accomplished prompt engineers – we can begin building new collaborative research methods between humans and generative AI.
Productivity boosts. An MIT study found that introducing generative AI increased quality of work, productivity, job satisfaction, and self-efficacy. The trick? AI shifted workers’ time away from drafting and toward creativity and editing.
🥸 Biased or Bias-Buster?
AI algorithms are on the rise in the workplace – but due to cases of misuse, human trust in them still lags behind.
While it’s true that AI algorithms can perpetuate human stereotypes and historical bias, they can also be tools to overcome them. AI can help anonymize information and address systemic bias – plus, it’s a lot easier to call out an algorithm for its bias than a human.
Hidden bias. A team of behavioral researchers investigated ChatGPT’s hidden biases by asking it poverty-related questions. They found that, with the right prompt engineering, AI chatbots might end up less biased than the average American – but also that the chatbot was easily swayed into perpetuating harmful narratives about poverty.
Reframing AI survey bots. AI-generated responses can be hard to distinguish from human ones. While that’s currently a problem for researchers conducting surveys, it might also point to the future of psychological research.