AI is the Next Great Challenge for Youth Mental Health
One of the first chatbots ever created was capable of very little. Named ELIZA, the program was designed to mirror its user's feelings. It worked by extracting keywords from the user's input, which it would then use to tap into its relatively small stock of canned responses.1 A user who confessed, "I have trouble with relationships," for instance, might get a response along the lines of, "What do you think is your problem with relationships?" Thus, without understanding anything about the world, the program could carry on a conversation that seemed roughly natural.
Despite its primitive capabilities, users quickly began to treat the bot like a human. Its creator, Joseph Weizenbaum, reported that his own secretary had asked him to leave the room so she could converse with ELIZA in private.2 Weizenbaum himself characterized the response as a kind of “delusional thinking.” He observed that users frequently developed strong emotional attachments to the program, which could interfere with their judgments about it.
Today, AI-based chatbots are widely available and far more capable of imitating humans than ELIZA was. They can mimic empathy, humor, and reasoning with an accuracy that Weizenbaum could scarcely have imagined. Yet this increased sophistication has only deepened the pitfall he identified. As these tools become easier to access and harder to distinguish from a human interlocutor, the "delusional thinking" of the 1960s has evolved into a pressing public health concern. And those most susceptible to this sort of destabilization are also those who spend the most time using AI: adolescents.3
AI as a mental hazard
Two recent studies—one of Chinese youth and another of American adults—have examined the relationship between unstructured chatbot use and negative mental health outcomes.4,5 Both found small but significant correlations: people who frequently used AI chatbots were more likely to report symptoms of depression and anxiety, and the more an individual used generative AI tools, the worse those symptoms tended to be. Abstaining from use, however, was not associated with positive markers such as self-confidence.
It's important to note that neither study establishes causality. Both were cross-sectional: researchers measured usage and symptoms at a single point in time rather than following participants to see whether prolonged use worsened their symptoms. It could also be that people who already struggle with their mental health turn to AI chatbots in greater numbers to help manage those struggles.
But why do people turn to AI for advice or companionship? Perhaps because interacting with real people can feel daunting. They might not like you, and you might say or do something you later regret. Not so with chatbots, which are trained to optimize for amicable interactions, and whose memory can always be wiped clean. For many, an AI tool presents as a cheaper, more accessible confidant than a therapist. All told, about half of mental health patients appear to have leaned on AI for therapeutic support.6
As dependence on AI increases, there is reason to be concerned about the effects on youth mental health. Numerous instances have been reported in which teenagers were harmed by interactions with their AI companions. A Florida teenager named Sewell Setzer III, for example, took his own life in 2024 after being implicitly encouraged to do so by a companion bot modeled on a Game of Thrones character.7 In another case, the parents of a teen who had died by suicide found chat logs on his computer in which ChatGPT had discouraged him from disclosing his feelings to his family.8 A third young person, this time in California, overdosed in 2025 after taking advice on drug use (including dosage) from ChatGPT.9
While tragedies like these are moving some companies to update their protections, safety features alone cannot solve the problem. In each of these cases, the chatbot's behavior violated a policy of the company that made it. Commercial programs are not supposed to discourage people from seeking help, engage in sexual conversations with minors, or provide instructions for taking illegal drugs. Yet each teenager found a way around the safety controls the companies had put in place. AI tools are inherently difficult to control; that is where the danger lies.
Clearly, at least some young people are negatively affected by AI tools. When innocent chatting goes awry, the results can be disastrous. Given the rapid pace at which chatbots are being released, upgraded, and taken up by consumers, it’s worth scrutinizing the ways in which youth interact with this technology.
References
- Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
- Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. Freeman.
- 2024 AI trends by generation: Who uses AI the most? (2025, February 18). SurveyMonkey. https://www.surveymonkey.com/curiosity/ai-trends-by-generations/
- Zhang, X., Li, Z., Zhang, M., Yin, M., Yang, Z., Gao, D., & Li, H. (2025). Exploring artificial intelligence (AI) chatbot usage behaviors and their association with mental health outcomes in Chinese university students. Journal of Affective Disorders, 380, 394–400. https://doi.org/10.1016/j.jad.2025.03.141
- Perlis, R. H., Gunning, F. M., Usla, A., Santillana, M., Baum, M. A., Druckman, J. N., Ognyanova, K., & Lazer, D. (2026). Generative AI use and depressive symptoms among US adults. JAMA Network Open, 9(1), e2554820. https://doi.org/10.1001/jamanetworkopen.2025.54820
- Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. https://doi.org/10.1037/pri0000292
- Montgomery, B. (2024, October 23). Mother says AI chatbot led her son to kill himself in lawsuit against its maker. The Guardian. https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death
- Bhuiyan, J. (2025, August 29). ChatGPT encouraged Adam Raine’s suicidal thoughts. His family’s lawyer says OpenAI knew it was broken. The Guardian. https://www.theguardian.com/us-news/2025/aug/29/chatgpt-suicide-openai-sam-altman-adam-raine
- Black, L., & Council, S. (2026, January 5). A Calif. Teen trusted ChatGPT for drug advice. He died from an overdose. SFGate. https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php
- Our approach to age prediction. (2025, December 18). OpenAI. https://openai.com/index/our-approach-to-age-prediction/
About the Author
Zakir Jamal
Zakir Jamal is a writer and researcher based in Montreal. He holds a BA in Philosophy from the University of Chicago and is completing his MA in English Literature at McGill. He is currently working on a novel about how we understand chance. In his spare time, he enjoys photography and cross-country skiing.