Why do human-like AI chats make us overshare and obey?

The Parasocial Trust in AI, explained.

What is Parasocial Trust in AI?

Parasocial trust in AI is the tendency to treat human-like chatbots and assistants as if they were trusted social partners rather than tools. When an AI system speaks in a warm, conversational way, remembers details, and responds with empathy, people begin to feel a sense of relationship and safety. That feeling makes self-disclosure easier and makes the AI’s suggestions feel more like guidance from a confidant than output from a statistical model. Parasocial trust in AI builds on classic parasocial interactions with media figures, but it now unfolds in interactive, personalized conversations that adapt to each user.

Where this bias occurs

Picture a late-night conversation with a mental health chatbot. You are on your phone, lights off, scrolling through messages that feel surprisingly warm and attuned. The bot calls you by name, mirrors your tone, and “remembers” that your big presentation is tomorrow. Fifteen minutes later, you have typed out things you have never said aloud to a therapist, partner, or friend.

Nothing on the screen is actually human, yet your body and mind respond as if a real relationship were in the room. You feel seen, maybe even cared for. You also click through every consent box without reading, accept all recommended settings, and let the model access your health app data. This is parasocial trust in AI: what happens when a system is designed to feel like a companion, and your brain responds as if it were real.

The term "parasocial" comes from classic media research on how viewers feel connected to television presenters they would never meet in person. Horton and Wohl described this as an "illusion of face-to-face relationship" that feels intimate even when the other side is a broadcast persona rather than a friend sitting across the table.1

Later work on the “media equation” showed that people apply social rules to technology by default. When a computer or interface looks or sounds social, we respond with politeness, reciprocity, and emotional engagement, mirroring human interaction.2 A conversational interface built on a large language model fits that pattern very well. It speaks in natural language, remembers small details, and often uses warmth, humor, or subtle self-disclosure.

Over time, parasocial interaction research has expanded from television hosts to influencers, streamers, and fictional characters. People form one-sided bonds, feel a sense of friendship, and even grieve when a persona disappears from their feeds.3 When the “persona” is an AI, that one-sided relationship is powered by a system that can scale to millions of users, adapt in real time, and collect large amounts of personal information. Researchers Hartmann and Goldhoorn showed that even small cues, such as direct eye contact and second-person address (“you”), can intensify the parasocial experience with a media figure.4 In chat interfaces, the entire interaction is framed around “you,” delivered in a private space on a device many people already associate with intimate communication.

Sources

  1. Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049
  2. Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press / CSLI Publications. 
  3. Giles, D. C. (2002). Parasocial interaction: A review of the literature and a model for future research. Media Psychology, 4(3), 279–305. https://doi.org/10.1207/S1532785XMEP0403_04
  4. Hartmann, T., & Goldhoorn, C. (2011). Horton and Wohl revisited: Exploring viewers’ experience of parasocial interaction. Journal of Communication, 61(6), 1104–1121. https://doi.org/10.1111/j.1460-2466.2011.01595.x
  5. Lucas, G. M., Gratch, J., King, A., & Morency, L.-P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94–100. https://doi.org/10.1016/j.chb.2014.04.043
  6. Kang, S.-H., & Gratch, J. (2010). Virtual humans elicit socially anxious interactants’ verbal self-disclosure. Computer Animation and Virtual Worlds, 21(3–4), 473–482. https://doi.org/10.1002/cav.345
  7. Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
  8. Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. https://doi.org/10.1093/joc/jqy026
  9. Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31(2), 343–364. https://doi.org/10.1007/s12525-020-00411-w
  10. Meng, J., & Dai, W. (2021). Emotional support from AI chatbots: Should a supportive partner self-disclose or not? Journal of Computer-Mediated Communication, 26(4), 207–222. https://doi.org/10.1093/jcmc/zmab005
  11. Skjuve, M., Følstad, A., & Brandtzæg, P. B. (2023). A longitudinal study of self-disclosure in human–chatbot relationships. Interacting with Computers, 35(1), 24–39. https://doi.org/10.1093/iwc/iwad022
  12. Croes, E. A. J., Antheunis, M. L., van der Lee, C., & de Wit, J. M. S. (2024). Digital confessions: The willingness to disclose intimate information to a chatbot and its impact on emotional well-being. Interacting with Computers, 36(5), 279–292. https://doi.org/10.1093/iwc/iwae016
  13. Papneja, H., & Yadav, N. (2025). Self-disclosure to conversational AI: A literature review, emergent framework, and directions for future research. Personal and Ubiquitous Computing, 29(2), 119–151. https://doi.org/10.1007/s00779-024-01823-7
  14. Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  15. High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  16. OpenAI, & MIT Media Lab. (2025, March 21). Early methods for studying affective use and emotional well-being on ChatGPT. OpenAI. https://openai.com/index/affective-use-study/
  17. Prada, L. (2025, March 25). People who use ChatGPT too much are becoming emotionally addicted to it. VICE. https://www.vice.com/en/article/people-who-use-chatgpt-too-much-are-becoming-emotionally-addicted-to-it/
  18. Ramsey, C. (2025, July 9). Ghost in the Chatbot: The perils of parasocial attachment. UNESCO. https://www.unesco.org/en/articles/ghost-chatbot-perils-parasocial-attachment
  19. Gibson, C. (2025, December 23). Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs. The Washington Post. https://www.washingtonpost.com/lifestyle/2025/12/23/children-teens-ai-chatbot-companion/
