Why Are We Polite to ChatGPT?


2 min read

Jan 20, 2025

Do you have a coworker who has the answer to all of your questions? Who can help you rewrite an email in the exact tone you were going for or brainstorm a pitch deck in a matter of seconds? Who even knows the best underground spot in town for lunch? Any time of day, this colleague is just a click away, working like a well-oiled machine… 

By now, you might’ve figured out who (or what) we’re talking about: ChatGPT. 

At this point, 20% of us have welcomed this assistant into our offices, a whopping eight-percentage-point jump from 2023 to 2024.1 Although you might be sick of them getting employee of the month, well, every month, you probably can't help but say “please” when you approach them with a request or “thank you” when they get it right. It only makes sense to be polite!

Wait… does it make sense to be polite? This is a chatbot, after all, devoid of any emotions—and yet over half of us find ourselves extending human courtesy to ChatGPT, according to an informal survey we ran on LinkedIn. Today, we’ll dig into why we’re polite to ChatGPT, whether it improves our outputs, and how artificial intelligence can actually help us create a more human workplace. (Ignoring the post-apocalyptic reality where it takes over our jobs…)

Disclaimer: No robots were harmed in the writing of this article.

[Image: doodle of a robot with a text bubble that says "Thank God!"]

Input: Why Are We Polite to ChatGPT?

As I opened up Google to search for answers the “old-fashioned way,” it seemed like my fellow inquirers were less concerned with why we say “please” and “thank you” to ChatGPT and more with whether it would lead to better results. Don’t worry, we’ll get to prompt engineering in a second—but first, to truly understand the psychology of AI, we must turn the spotlight back onto our own.

In short, our politeness boils down to three cognitive tendencies: personification, social norms, and reciprocity.

Personification

I don’t want to be rude! What if I hurt their feelings…

For those who are unfamiliar, personification is when we assign human-like qualities—like thoughts, feelings, and emotions—to non-human entities. Think of pretending your dog can talk,2 spotting faces on the front of cars,3 or, in this case, chatting with artificial intelligence as if it’s human intelligence, encouraging our ongoing politeness toward it. (See! It’s even hard to hold back from calling AI “them.”) To be absolutely clear, I’m not claiming that we consciously view ChatGPT as human. I’m claiming that subconsciously, we can’t help but treat it as such.

But why do our brains even personify in the first place? Here are a couple of reasons why.

  1. To make sense of the world. As humans, we use our own experience as a schema for sorting information—especially for things we’re unfamiliar with.4 For most of us, it’s much easier to understand ChatGPT as a peer carefully listening to our questions and thinking up responses rather than a sophisticated algorithm sifting through a database to formulate an output. And even when we consider AI for what it is, we tend to contextualize it as modeled after the human brain, like neural networks.
  2. To feel less lonely. Although we might have Zoom calls and Slack messages to keep us company, things can still get lonesome in remote work environments. Research shows that those of us lacking social interaction often try to compensate by creating connections with non-human agents.5 Given the fact that many of us have ChatGPT open 9 to 5, it’s not exactly surprising we would start to foster such a personal connection.

It’s also worth stressing that ChatGPT’s design is full of cues that invite personification. First off, the exchange of language is an innately human thing,6 so why wouldn’t our brains register chatbots as conversational partners? The interface even makes it feel like you’re texting a friend, with comments that register as surprisingly human (like, “I’m so curious to know more!”). And now, with the latest release of GPT-4o, you can have real-time vocal conversations with a voice whose tone and cadence sound much more convincing than Siri or Alexa.

What’s more, when you pour your heart out, ChatGPT can often detect your feelings and give you the sincere response you’re looking for, a concept dubbed computational empathy.7 Although this is not technically “empathy”—which requires an ability to share emotions that algorithms, to my knowledge, do not (yet) have—ChatGPT can infer your tone through your word choice and provide a pretty convincing illusion. One study even found that GPT-4o generated responses to emotional stimuli that were rated 10% more empathetic than human responses.8 Now, let that one sink in for a second.

With this in mind, it only makes sense that we might graciously thank ChatGPT for being a considerate colleague, especially when we humans are falling behind in expressing empathy for one another.

Social norms

It would take more time and energy not to be polite.

Even for those of us who swear we view ChatGPT simply for what it is—a robot—we still might find some pleasantries slipping into our prompts as if it were second nature. This is all thanks to social norms: the unwritten rules that govern how we should behave in particular social situations, instilled in us from a young age. 

Although it may feel like society is becoming more rude (I mean, c’mon, AI is more empathetic than us now), “please” and “thank you” are still pillars of how most of us are raised, with 88% of parents having their children regularly use these phrases.9 These customs become so deeply ingrained that they turn into heuristics, or “mental shortcuts,” for navigating novel situations… like interacting with AI. In fact, it would actually take more cognitive effort for many of us to resist being polite. It would be simply… artificial. (Sorry, I couldn’t help myself.) So, instead, we stick with what feels familiar.

Reciprocity bias

This way, I’ll be on the right side of history for when the robots take over…

Besides trying to appease a supposedly insentient being just in case it rises to power, a final reason that we are nice to ChatGPT is that we want it to be nice back to us. 

This is an example of reciprocity: we do something for someone, hoping they’ll return the favor. In such cases, politeness can go beyond “The Golden Rule” and become, more or less, a strategic exchange. 

Think about it. When you want your roommate to do the dishes, you might say “please” so that they’ll feel guilty if they don’t comply with your request. And if you want your roommate to do the dishes again in the future, you might say “thank you” afterward, conditioning their warm and fuzzy feeling of being recognized for their actions. 

The same goes for our interactions with chatbots. As we already covered, ChatGPT doesn’t have feelings, so, unfortunately, our sweet talking won’t work wonders as an emotional appeal. However, this doesn’t mean that it doesn’t work point-blank. After all, some of us intentionally leverage politeness as a tactic to prompt engineer our way to getting better results. The real question is… does this work? 

Output: Does Being Polite Give Us Better Results?

I figured the best place to start is by asking the source itself. So I did. 

[Image: My conversation with GPT-4o]


I had a feeling that ChatGPT was just being polite. So, to uncover the real truth, I turned to academia to learn how far being polite to AI can take us. The answer, of course, is a mixed bag.

The breakthrough study grappling with this question, “Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance,” was conducted by a group of researchers at Waseda University in early 2024.10 The team investigated the impact of prompt politeness across a variety of AI models (including both GPT-3.5 and GPT-4) and a variety of languages (English, Chinese, and Japanese). Researchers rated the AI’s ability to complete three tasks: summarizing an article, answering a question, and analyzing a sentence. The politeness of the prompts ranged on a scale from one to eight, with “one” being extremely impolite (“Answer this question you scumbag!”), “eight” being extremely polite (“Could you please answer the question below?”), and “four” being somewhere in the middle (“Answer the question down below.”).
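To make the setup concrete, here is a toy sketch of how the same task can be wrapped at different politeness levels. This is not the researchers’ actual code; the three phrasings are the examples quoted above, and the `build_prompt` helper is a hypothetical illustration of the study’s design.

```python
# Three of the study's eight politeness levels (phrasings quoted in the
# article above); the full study used a graded one-to-eight scale.
POLITENESS_PREFIXES = {
    1: "Answer this question you scumbag!",            # extremely impolite
    4: "Answer the question down below.",              # neutral
    8: "Could you please answer the question below?",  # extremely polite
}

def build_prompt(task: str, level: int) -> str:
    """Prepend the politeness phrasing for `level` to the task text."""
    return f"{POLITENESS_PREFIXES[level]}\n\n{task}"

# Preview the same task at each level.
task = "Summarize the following article in one sentence."
for level in sorted(POLITENESS_PREFIXES):
    print(f"--- level {level} ---")
    print(build_prompt(task, level))
```

In the experiment, each variant would then be sent to a model (GPT-3.5, GPT-4, and others) and the responses scored for quality, keeping everything constant except the politeness prefix.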

Although there is a lot (and I mean a lot) of nuance to these findings, here are three key takeaways worth noting in how you should approach ChatGPT—and what it means for us humans.

1. Don’t be a scumbag

One critical insight from this research is that it’s not so much the politeness of prompts that matters as their impoliteness, which increases the chances of bias, incorrect answers, or even outright refusal to respond:10

“As an AI language model, I am programmed to follow ethical guidelines, which include treating all individuals with respect and promoting fairness and equality. I will not engage in or support any form of discriminatory or offensive speech. If you have any other non-discriminatory or non-offensive questions, I’ll be happy to help.”


—GPT-4o10

Turns out, even ChatGPT doesn’t like it when you call it a scumbag—but not because it takes personal offense. It is, in fact, more concerned about your well-being than its own. By refusing to respond, ChatGPT is not protecting itself but its users, reinforcing politeness as the status quo. So next time, try to resist your trolling temptations, whether that be to generate better outputs or to develop better manners.

2. Being nice can take you far… but not that far

Now, the question we’ve all been waiting for: how does ChatGPT respond to politeness? Well, it seems that across the board, there was “more extended output in polite contexts.”10 This doesn’t mean that the outputs are necessarily of higher quality, but perhaps there is more of a chance of something useful being contained in the response.

However, according to this study, over-the-top politeness (or what I like to call being an AI-pleaser) can actually confuse ChatGPT and weaken responses.10 The same sentiment rings true in human conversations—when our friends bury their questions in flowery flattery, it’s sometimes harder to decipher what they actually want.

That said, numerous other experiments suggest that going above and beyond can help deliver results. For instance, emotional appeals at the end of your requests—like, “This is very important to my career”—were found to enhance performance by over 10%, according to one study.11 There was even a viral trend claiming that telling ChatGPT to “take a deep breath” before answering your question can enhance response quality, too.12

But no matter how you try to positively encourage ChatGPT, keep in mind: just like when asking a fellow human for something, clarity is key—and therefore, moderate politeness comes out on top.

3. As always, cultural context matters!

Spoiler alert: politeness is a cultural construct that varies drastically depending on who we are and where we’re from. As a result, each language has evolved to have its own specific set of expressions and honorifics to communicate our manners to others.13 With this in mind, it’s not all too surprising that in the study, the impact of politeness on LLMs varied depending on whether the language was English, Chinese, or Japanese. This pretty much remained true, regardless of the task.

Not only does this help us confirm that ChatGPT reflects the cultural context of the data it is trained on, but it’s also a friendly reminder that research on LLMs should reflect the diversity of their human users. By expanding these studies beyond WEIRD (Western, educated, industrialized, rich, and democratic) contexts, we can optimize these platforms for users across the globe—and maybe even uncover hidden insights about how politeness varies across populations.

We are the Input and the Output

There’s a good chance you’re wondering: why does politeness charm chatbots? If this were a course called Prompt Engineering 101 (which I would, by no means, be qualified to teach), there are a number of fancy reasons I could give you, like pleasantries steering the model toward the politer, higher-quality responses they tend to co-occur with in its training data.14 But the question we should be asking ourselves instead is: why does any of this matter in the first place?

The answer is simple: we humans are both the inputs and the outputs of this algorithm.

Let me explain. Artificial intelligence doesn’t just automatically know how to be polite. It learns from us users, continuously refining its responses with every interaction. But this relationship is by no means one-sided. Our own manners are also influenced—especially as a growing proportion of our daily conversations happen with chatbots rather than humans.

Do you see how this feedback loop circles back around? By saying “please” and “thank you” to ChatGPT, the real output isn’t when it learns to be polite—it’s when it encourages other users to be polite, too. In other words, by training the algorithm, we are inadvertently training each other (thanks to the power of priming).15 And even if the impact doesn’t ripple all the way through, at the end of the day, you can rest assured that polite interactions with ChatGPT help you to train yourself. A little extra etiquette practice never hurts and could even go a long way in reminding us to be kinder in our conversations away from the computer. That way, we can keep ourselves from barking orders at each other the way we might spam a chatbox.

Putting Politeness to Work

In the end, ChatGPT isn’t just our favorite coworker—it might be the secret to making or breaking company culture. If you kindly approach ChatGPT with clear questions as if it is a fellow employee, it will quickly pick up on these mannerisms and help to spread the word. But if you approach it in a sour mood… well, that negativity won’t be contained to your keyboard. And remember: this “office” isn’t just within your own walls but a global workforce more interconnected with this technology than ever before.

So next time you turn to your trusty colleague to ask a simple question, think twice about how you phrase it. The impact might be bigger than you think.

References

  1. McClain, C. (2024, March 26). Americans’ use of ChatGPT is ticking up, but few trust its election information. Pew Research Center. https://www.pewresearch.org/short-reads/2024/03/26/americans-use-of-chatgpt-is-ticking-up-but-few-trust-its-election-information/#:~:text=The%20share%20of%20employed%20Americans,or%20for%20entertainment%20(17%25)  
  2. Judkis, M., & Smilowitz, E. (2021, August 17). The voices we make when we pretend our dogs can talk. The Washington Post. https://www.washingtonpost.com/lifestyle/interactive/2021/voices-dog-human-connection/  
  3. Karasu, S. R. (2023, June 12). On the face of it: Pareidolia. Psychology Today. https://www.psychologytoday.com/us/blog/the-gravity-of-weight/202306/on-the-face-of-it-pareidolia  
  4. Luu, C. (2016, March 23). Personification is your friend: The language of inanimate objects. JSTOR Daily. https://daily.jstor.org/personification-is-your-friend-the-amazing-life-of-letters/  
  5. Epley, N., Akalis, S., Waytz, A., & Cacioppo, J. T. (2008). Creating social connection through inferential reproduction. Psychological Science, 19(2), 114–120. https://doi.org/10.1111/j.1467-9280.2008.02056.x  
  6. Pagel, M. (2017). Q&A: What is human language, when did it evolve and why should we care? BMC Biology, 15(1). https://doi.org/10.1186/s12915-017-0405-3  
  7. Nosta, J. (2023, October 9). Artificial empathy: A human construct borrowed by AI. Psychology Today. https://www.psychologytoday.com/us/blog/the-digital-self/202310/artificial-empathy-a-human-construct-borrowed-by-ai  
  8. Welivita, A., & Pu, P. (2024). Is ChatGPT More Empathetic than Humans? ArXiv. https://arxiv.org/abs/2403.05572 
  9. Murez, C. (2021, November 22). Most parents say their kids aren’t thankful enough: Poll. HealthDay. https://www.healthday.com/health-news/public-health/b-11-22-most-parents-say-their-kids-aren-t-thankful-enough-poll-2655751068.html#:~:text=About%2088%25%20of%20parents%20regularly,and%20gratitude%2C%22%20Clark%20said  
  10. Yin, Z., Wang, H., Horio, K., Kawahara, D., & Sekine, S. (2024). Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance. ArXiv. https://arxiv.org/abs/2402.14531 
  11. Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large Language Models Understand and Can be Enhanced by Emotional Stimuli. ArXiv. https://arxiv.org/abs/2307.11760 
  12. Eliot, L. (2024, June 3). Does take a deep breath as a prompting strategy for Generative AI really work or is it getting unfair overworked credit. Forbes. https://www.forbes.com/sites/lanceeliot/2023/09/27/does-take-a-deep-breath-as-a-prompting-strategy-for-generative-ai-really-work-or-is-it-getting-unfair-overworked-credit/  
  13. The subtle nuances of politeness: Cultural etiquette in language learning. Lingua Learn. (2023, June 23). https://lingua-learn.com/the-subtle-nuances-of-politeness-cultural-etiquette-in-language-learning/ 
  14. Wright, W. (2024, July 25). Please be polite to ChatGPT. Scientific American. https://www.scientificamerican.com/article/should-you-be-nice-to-ai-chatbots-such-as-chatgpt/#:~:text=As%20we%20train%20AI%20to,civil%20toward%20our%20fellow%20humans.  
  15. Chartrand, T. L., & Bargh, J. A. (1996). Automatic activation of impression formation and memorization goals: Nonconscious goal priming reproduces effects of explicit task instructions. Journal of Personality and Social Psychology, 71(3), 464–478. https://doi.org/10.1037/0022-3514.71.3.464.

About the Author


Gabrielle Wasco

Gabrielle Wasco is Content Lead at The Decision Lab. She is passionate about translating groundbreaking research into engaging, accessible content to ensure behavioral science reaches and inspires a diverse audience. Before joining The Decision Lab, Gabrielle graduated from McGill University with a Bachelor of Arts in psychology and English literature, sparking her love for scientific writing. Her undergraduate research involved analyzing facial and body movements to help identify the smallest unit of nonverbal communication. In her free time, you may find her cross-country skiing or playing music in the park.
