Have you ever used a Large Language Model - like ChatGPT - at work?
With the rapid rise of everyday-use AI chatbots, industries are torn between their usefulness and their growing pains. While the media has hailed ‘prompt engineer’ as the hot new job in tech, a rising number of companies have banned or restricted employee use of LLMs, including Apple, JPMorgan, Bank of America, Citigroup, and Samsung.
Their hesitation isn’t unfounded. As with any shiny new technology, we tend to overlook the biased nature of LLMs, overestimate their accuracy, and forget that they aren’t immune to sensitive data breaches.
But that doesn’t mean they can’t serve us, in ways we’re just beginning to imagine. Some estimates suggest that LLMs could take over 40% of our working hours, freeing us up to focus on higher-level tasks.
Before we get there, we have a steep learning curve ahead of us. So today, we’re exploring the behavioral side of using LLMs at work: how can we combine the strengths of human and AI decision-making? And what does that mean for behavioral science?
Until next time,
Sarah and the AI enthusiasts @ TDL