Parasocial Trust in AI, explained
Why do human-like AI chats make us overshare and obey?
What is Parasocial Trust in AI?
Parasocial trust in AI is the tendency to treat human-like chatbots and assistants as if they were trusted social partners rather than tools. When an AI system speaks in a warm, conversational way, remembers details, and responds with empathy, people begin to feel a sense of relationship and safety. That feeling makes self-disclosure easier and makes the AI’s suggestions feel more like guidance from a confidant than output from a statistical model. Parasocial trust in AI builds on classic parasocial interactions with media figures, but it now unfolds in interactive, personalized conversations that adapt to each user.
Where this bias occurs
Picture a late-night conversation with a mental health chatbot. You are on your phone, lights off, scrolling through messages that feel surprisingly warm and attuned. The bot calls you by name, mirrors your tone, and “remembers” that your big presentation is tomorrow. Fifteen minutes later, you have typed out things you have never said aloud to a therapist, partner, or friend.
Nothing on the screen is technically human, yet your body and mind are acting as if a real relationship is in the room. You feel seen, maybe even cared for. You also click through every consent box without reading, accept all recommended settings, and let the model access your health app data. This is parasocial trust in AI. It is what happens when a system is designed to feel like a companion, and your brain responds as if it were real.

The term “parasocial” comes from classic media research on how viewers feel connected to television presenters they would never meet in person. Horton and Wohl described this as an “illusion of face-to-face relationship” that feels intimate even when the other side is a broadcast persona rather than a friend sitting across the table.1
Later work on the “media equation” showed that people apply social rules to technology by default. When a computer or interface looks or sounds social, we respond with politeness, reciprocity, and emotional engagement, mirroring human interaction.2 A conversational interface built on a large language model fits that pattern very well. It speaks in natural language, remembers small details, and often uses warmth, humor, or subtle self-disclosure.
Over time, parasocial interaction research has expanded from television hosts to influencers, streamers, and fictional characters. People form one-sided bonds, feel a sense of friendship, and even grieve when a persona disappears from their feeds.3 When the “persona” is an AI, that one-sided relationship is powered by a system that can scale to millions of users, adapt in real time, and collect large amounts of personal information. Researchers Hartmann and Goldhoorn showed that even small cues, such as direct eye contact and second-person address (“you”), can intensify the parasocial experience with a media figure.4 In chat interfaces, the entire interaction is framed around “you,” delivered in a private space on a device many people already associate with intimate communication.