Navigating the New AI Mental Health Landscape Among Youth
The Big Problem
Youth mental health is being reshaped by a digital world where the first listener might not be human. Artificial intelligence (AI) chatbots now act as confidants for many: they’re fluent, responsive, and endlessly patient. However, they lack what makes empathy human—context, reciprocity, and moral judgment. During COVID-19, when schools closed and in-person interaction with classmates faded, social learning all but stalled. The small, everyday moments that once built emotional regulation—group projects, hallway jokes, minor conflicts—were replaced by digital exchanges and a rapid influx of online mental health tools designed to fill the void.
Now, for many, chatbots sit at the front line of youth well-being. Most chatbots, however, weren’t designed for care; they were built for engagement. And while they might offer comfort on demand, that same design can deepen social avoidance or dependence. Traditional supports like therapy and school counseling can’t always compete with something that is instant, available around the clock, and rarely contradicts you.
To make this new landscape safer, we need to understand the behaviors shaping it, and behavioral science is well-equipped for that. By looking at how design cues, defaults, and social norms guide help-seeking online, we can build systems that keep AI easy to reach while bringing people back into care, so empathy feels shared, real, and human again.
About the Author
Maryam Sorkhou
Maryam holds an Honours BSc in Psychology from the University of Toronto and is currently completing her PhD in Medical Science at the same institution. She studies how sex and gender interact with mental health and substance use, using neurobiological and behavioural approaches. Passionate about blending neuroscience, psychology, and public health, she works toward solutions that center marginalized populations and elevate voices that are often left out of mainstream science.