Why do we treat our in-group better than we do our out-group?
In-group Bias, explained.
What is In-group Bias?
In-group bias (also known as in-group favoritism) is the tendency for people to give preferential treatment to others who belong to the same group that they do. This bias shows up even when people are put into groups randomly, making group membership effectively meaningless.
Where this bias occurs
Let’s say you’re a football fan, and you root for the New England Patriots. At work, you have a couple of coworkers who are also into football: John, who is also a Patriots fan, and Julie, who supports the Philadelphia Eagles. You’re much closer with John than you are with Julie, even though you and Julie actually have more in common (beyond sports preferences) than you do with John. Your work friendships are an example of the in-group bias at work.
In-group bias can harm our relationships with people who don’t happen to belong to the same group that we do. Our tendency to favor in-group members can lead us to treat outsiders unfairly, or to perceive the exact same behavior very differently depending on whether it was committed by an in-group or an out-group member. We might even feel justified in committing immoral or dishonest actions, so long as they benefit our group.1
In-group bias is a huge component of prejudice and discrimination, leading people to extend extra privileges to people in their own in-group while denying that same courtesy to outsiders. This prejudice creates unequal outcomes for marginalized groups. In the legal system, for example, judges and juries might favor defendants who are the same race, gender, or religion as them, and rule unfairly against those who are not.2
How it affects product
Unsurprisingly, we prefer to purchase products that in-group members use—and even look down on products that out-group members advocate. For instance, if all of our friends are flaunting the newest smartphone, we might be inclined to upgrade to help us feel like we’re fitting in, and to further distinguish ourselves from those with the outdated version.
Advertising agencies leverage this in-group bias to their advantage, deliberately choosing actors and slogans they know will resonate with a specific clientele. For instance, an ad for a small car dealership may use actors resembling locals and emphasize small-town values. Or an eco-friendly cosmetics brand may choose an environmental activist to model its products, rather than a celebrity notorious for flying around on private jets.
In fact, brand loyalty often establishes an “us versus them” mentality all on its own. We tend to strongly identify with our material possessions, viewing them as an extension of ourselves. For this reason, we often feel like we belong in a group with others who own the same products as us, even if the connection is almost meaningless. We’ve all had the experience of being on public transportation and complimenting someone’s shoes because we own the same brand. We feel an affinity towards them simply because they chose the same products we did, which, in a small way, makes them like us.
In-group bias and AI
Just as with any other technological innovation, the way that in-group members regard AI determines how willing we are to adopt it into our lives. For instance, if you work at a start-up with young, tech-savvy employees actively incorporating machine learning into project pipelines, you’ll probably be eager to try using ChatGPT for researching new assignments. Meanwhile, if you work at a well-established company with corporate leaders loyal to traditional procedures, you might be skeptical about how AI is dominating the workplace and less likely to give it a try.
Machine learning can also amplify in-group biases. Datasets that train AI software are usually skewed towards one group or against another—after all, they are made by humans. This means that the algorithm may absorb these prejudices itself, while maintaining a facade of objectivity through its machine exterior. With this in mind, it is essential to stay skeptical of the results AI generates for you – they may favor one side more than you think.
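As a toy illustration of this point (the groups, labels, and approval rates below are entirely hypothetical, not drawn from any real system), even a trivially simple “model” that just learns base rates from a skewed dataset will reproduce the skew in its predictions:

```python
from collections import Counter

# Hypothetical training data: applicants from group A were historically
# approved 80% of the time, applicants from group B only 20% of the time.
training = [("A", "approve")] * 80 + [("A", "deny")] * 20 \
         + [("B", "approve")] * 20 + [("B", "deny")] * 80

def majority_label(group):
    """Predict by the most common historical outcome for that group.

    A model this crude has no 'opinion' of its own, yet it faithfully
    reproduces whatever bias was baked into its training data.
    """
    labels = Counter(label for g, label in training if g == group)
    return labels.most_common(1)[0][0]

print(majority_label("A"))  # approve
print(majority_label("B"))  # deny
```

The point of the sketch is that the prediction rule itself is perfectly neutral; the unequal outcomes come entirely from the data it learned from.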
Why it happens
We all like to think we are fair, reasonable people. Most of us feel confident that we (unlike others) are free from prejudice and that we see and treat other people equally. However, over the years, research on in-group bias has shown that group membership subconsciously affects all of our perceptions on a very basic level—even if people are sorted into groups based on totally meaningless criteria.
One classic study illustrating the power of this bias comes from the psychologists Michael Billig and Henri Tajfel. In a 1973 experiment, participants started by looking at pairs of paintings and determining which one they preferred. At this point, some participants were told that they’d been assigned to a specific group based on their painting preferences, while others were told they were assigned to a group by a random coin toss. (As a control, other participants weren’t told anything about being in a group and were merely assigned a code number.)
After this, each participant went into a cubicle, where they awarded real money to other participants by marking it down in a booklet. The other participants were listed by code number to conceal their identity; however, the code number indicated which of the two groups the participants were assigned to.
The researchers intentionally designed this study so that they could tease apart the possible causes of in-group bias. Would people be more generous to their group members even when the groups were random? Or would this effect appear only when the groups were based on painting preference, because participants felt that they had something in common with their group mates?
The results showed that people gave more money to members of their in-group regardless of how that group had been formed in the first place. In other words, people were more generous to their in-groups, even when a coin toss had assigned them.3 Experiments that follow this same basic outline, known as the minimal group paradigm (MGP), have been repeated time and time again with the same results. No matter what, favoritism never seems to depend on any meaningful connection beyond being in the same group.
But in-group bias goes beyond kindness to our in-group; it can also spill over into harm toward the out-group. Another famous study illustrating in-group bias is the Robbers Cave study, conducted by Muzafer Sherif. In this experiment, 22 eleven-year-old boys attended a mock summer camp and were divided into two teams: the Eagles and the Rattlers. The teams were kept separate, and only interacted when competing in various activities.
The two teams showed increasing hostility towards each other, which eventually escalated into full-out violence (what some called a “real-life Lord of the Flies”).9,16 Although there were a number of ethical issues plaguing the experiment, including a harsh environment that may have made the boys more anxious and aggressive than they would have been otherwise,10 Sherif’s study is still seen as a frightening demonstration of how group identity alone can become the foundation for conflict.
Another troubling finding is that the prejudice sparked by in-group bias materializes in humans from a very early age. Children as young as three show favoritism for their in-group, and research in slightly older children (ages five to eight) found that kids showed this bias regardless of whether their group had been assigned randomly or meaningfully—just like adults.5
Group memberships form part of our identities
There are a few theories of why in-group bias happens, but the most prominent is known as social identity theory, proposed by Tajfel and his colleagues. This approach is founded on a basic fact about people: we love to categorize things, including ourselves. The way we conceptualize our identities depends on the social categories we belong to. These categories could involve pretty much any attribute—for example, gender, nationality, and political affiliation are all categories we place ourselves into. Not all of these categories are equally important, but they all contribute to our idea about who we are and our role in society.6 Categorization processes also compel us to sort people into one group or another.
Another basic truth about people: we have a need to feel positive about ourselves, and we are frequently overly optimistic about how we rank compared to others. Our desire for self-enhancement guides our categorizations to rely on stereotypes that favor our in-group and demean the out-group. In short, since our identities heavily rely on the groups we belong to, a simple way to enhance our image of ourselves is by giving a shiny veneer of goodness to our in-group—and doing the opposite for our out-group.4
Research that supports social identity theory has found that low self-esteem is linked to negative attitudes about people belonging to out-groups. In one Polish study, participants completed several questionnaires, including one on self-esteem, one on collective narcissism, one on in-group satisfaction, and one on hostility towards out-groups. (Collective narcissism and in-group satisfaction both involve holding positive opinions about the group one belongs to; the difference is that collective narcissism makes group membership central to one’s identity, while in-group satisfaction does not.)
The results showed that self-esteem was positively correlated with in-group satisfaction and negatively correlated with collective narcissism. Put another way, for people with low self-esteem, group membership was more likely to be a central fixture of their identity. Low self-esteem was also linked with out-group derogation.7 Taken together, these results suggest that people with low self-esteem feel a more urgent need to elevate their own group above others because a larger slice of their identity depends on their belief that their group is better.
We expect reciprocity from others
Social identity theory is the commonly accepted explanation for in-group bias. However, some researchers have argued that Billig and Tajfel’s research didn’t account for an important social norm: reciprocity, which prompts us to repay others’ kindnesses.
In one experiment, Yamagishi et al. (1998) replicated one of Billig and Tajfel’s original MGP studies, with one modification: some of the participants were paid a fixed amount by the experimenter, rather than receiving money that had been awarded to them by other participants. This made it clear to these participants that the decisions they made about how to allocate money would have no bearing on the rewards they themselves received at the end of the experiment. As the researchers predicted, this group did not show any evidence of in-group bias: they divided up their money equally between in-group and out-group members.8
These results contradict previous conclusions that in-group bias arises from merely belonging to a group. Rather than springing up automatically wherever a group is formed, it might be the case that group favoritism only happens when people have the expectation that their good deeds will be repaid by their group members. In other words, having an in-group to belong to gives rise to “group heuristics”—the expectation of reciprocity from in-group members, but not necessarily out-group members.
Why it is important
Like all cognitive biases, in-group bias happens without us realizing it. Although we may believe that we are being fair and reasonable in our judgments of other people, in-group bias demonstrates that we may not be as charitable to outsiders as we are to people more “like us.” When it comes to judgments we make about other ethnic groups, in-group bias fuels ethnocentrism: the tendency to use our own culture as a frame of reference through which to evaluate other people. This narrow lens usually results in seeing other cultures as lesser, rather than simply different.
In-group bias has serious, real-world consequences, particularly for people belonging to marginalized groups (whether based on ethnicity, gender, religion, or whatever else). In the legal system, for example, an in-group bias towards one’s own ethnic group can influence a judge’s decision of whether or not to detain a suspect.2
In-group bias can also lead us to be more lenient than we necessarily should be towards in-group members who have done something wrong. In one study, researchers found that people who scored high on measures of modern racism were quick to excuse bad behavior committed by a European American and to praise them for their virtues. When it came to similar behavior perpetrated by an African American person, however, they were not so kind.11 As this study demonstrates, in-group bias can prevent us from holding in-group members accountable for their own behavior.
This bias also has unfortunate moral implications for our own decision-making. Research has found that people are more willing to lie or cheat in order to benefit their in-group, sometimes even when they themselves don’t stand to gain anything from this dishonesty.1 Our favoritism for our own group is apparently so strong that many of us will bend our morals for the sake of the tribe. This can obviously lead to some bad choices, especially for people who are lacking in self-esteem and are particularly desperate to gain the approval of their peers.
How to avoid it
In-group bias is very difficult to completely overcome because it sneakily operates below the surface of our consciousness. That being said, behavioral research points to some tactics that might help to reduce in-group bias.
Capitalize on people’s self-interest
While it sounds counterintuitive, some researchers have tried to exploit people’s self-interest in order to reduce their in-group bias. One study compared two games, known as the dictator game (DG) and the ultimatum game (UG). In both games, players decide how to split a sum of money between themselves and a recipient. In the DG, once the deciding player has made a decision, the recipient has no choice but to go along with it. However, in the UG, the recipient can choose to either accept or reject the first player’s offer. If they reject it, neither player receives anything.
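The incentive difference between the two games can be made concrete with a minimal sketch of their payoff rules (the pot size and offers below are hypothetical, not figures from the study):

```python
def dictator_game(pot, offer):
    """Dictator game: the recipient must accept whatever split the
    proposer chooses. Returns (proposer payoff, recipient payoff)."""
    return pot - offer, offer

def ultimatum_game(pot, offer, accepted):
    """Ultimatum game: the recipient can veto the split. A rejected
    offer leaves both players with nothing."""
    if accepted:
        return pot - offer, offer
    return 0, 0

# With a $10 pot, a stingy $1 offer is risk-free in the dictator game...
print(dictator_game(10, 1))                    # (9, 1)
# ...but in the ultimatum game the recipient can reject it,
# costing the proposer everything.
print(ultimatum_game(10, 1, accepted=False))   # (0, 0)
```

This is why the ultimatum game gives the proposer a concrete, selfish reason to treat an out-group recipient fairly: an insultingly low offer can be punished.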
In this study, participants played either the DG or the UG, and were told that their partner (who didn’t actually exist) either shared their view on abortion or held the opposite view. When participants were playing the DG, they showed significant in-group bias, offering more money to in-group members than out-group members. However, in the UG, this bias disappeared entirely.12 These results reveal that concrete incentives to treat people equally might be an effective strategy to reduce in-group bias.
Try a little teamwork
Remember the Robbers Cave study, where boys were separated into teams and pitted against each other? After making arch nemeses out of the Eagles and the Rattlers, Sherif and his colleagues were able to reduce the hostility between the two teams by forcing them to cooperate with each other. To achieve this, the researchers artificially cut off the camp’s drinking water supply and told the boys that they would all have to work together to fix it. (It was the 1950s, so you were allowed to put children in a forest and deprive them of water, for science.) Through this exercise, along with a few others giving the two teams a shared goal, they eventually got on friendly terms.
More recent evidence has supported the idea that encouraging cooperation between groups can reduce in-group bias. By interacting with an out-group, our categorizations of others can expand to include out-group members in a new, superordinate group identity. And even though Sherif theorized it was key for both groups to share a common fate, recent research suggests that this isn’t the case: interacting with one another is enough.13 Wherever possible, then, trying to encourage cooperation between groups is a useful strategy.
How it all started
In-group bias has probably been shaping human history for as long as we’ve been around, but it wasn’t until 1906 that it became an object of academic curiosity. The concept was introduced by the American sociologist William Sumner, known for his work on folkways (social norms specific to a given society or culture). Sumner believed that ethnocentrism (and the in-group bias underlying it) was universal among humans.4
In the second half of the twentieth century, social psychology started to gain steam as the world struggled to make sense of World War II and the Holocaust. The topic of intergroup relations, and why people could be so irrationally biased against people who weren’t like them, was a major area of interest (as it still is today).
In the 1960s, Sherif, of Robbers Cave fame, worked with his wife Carolyn to develop realistic conflict theory: an approach that posits group conflict arises from competition over resources. Later, in the 1970s, Billig and Tajfel developed the minimal group paradigm, and Tajfel coined social identity theory (along with another psychologist, John Turner).
Example 1 – 2008 presidential election
In the lead-up to the 2008 U.S. presidential election, there were two frontrunners for the Democratic nomination: Barack Obama and Hillary Clinton. As it turns out, Democrats’ allegiances to a given candidate were sometimes enough to inspire in-group bias.
Researchers recruited Democrats to play the dictator game, where they decided how much of a pool of money they would share with an anonymous partner. The participants indicated whether they preferred Obama or Clinton, and learned whether their partner agreed or disagreed with their preference. The researchers repeated this experiment three times: first, in June 2008, right after Clinton’s concession speech; next, in early August, before the start of the Democratic National Convention (DNC); and finally, in late August, after the DNC had ended.
The results revealed that, in the first two experiments, men showed significant in-group bias, giving significantly more money to partners who shared their choice of candidate. (Surprisingly, this bias was not found in women.) However, this difference disappeared in the third experiment after the DNC. But why?
The authors of the paper wrote that the 2008 primary season had been a particularly bitter one, and there had been worries in the Democratic Party that spurned Clinton supporters would break from the party and vote Republican. So the goal at the DNC was to foster a broader group identity among Democrats by leveraging support for Obama. The fact that the authors found reduced in-group bias after the DNC makes sense, given that national polls also found a large increase in support for Obama among Clinton supporters after the convention.14
Example 2 – Sports fans and in-group bias
It’s no secret that sports fans take their allegiances seriously, so it’s not surprising that people show in-group bias for fellow supporters of their own team. In one study, researchers had participants fill out a number of surveys right as they were leaving a basketball game. These surveys gauged how invested the participants were in their team, and had them rate the behavior of fans of both teams during the game. The results showed that the spectators were biased to favor their in-group, especially when their team lost. For those who identified strongly with their team, this effect was strongest when the game they had just watched was a home game, as their environment encouraged an even stronger in-group bias.15
What it is
In-group bias is the tendency for us to give preferential treatment to members of our own group while neglecting or actively harming out-groups.
Why it happens
The main explanation for in-group bias is social identity theory, which posits that membership in various groups comprises a large part of our identities. We need to feel positive about ourselves, and by favorably comparing our groups to others, we enhance our own self-concept.
Other theories include realistic conflict theory, which says that groups get into conflict when they compete for resources, as well as group heuristics, which says that we are nicer to in-group members only because we expect reciprocity from them.
Example 1 – Democrats and in-group bias in the 2008 election
In the lead-up to the 2008 U.S. presidential election, male Democratic voters expressed significant in-group bias, favoring people who shared their choice of candidate and penalizing others. This bias vanished after the DNC, which prioritized fostering a shared Democratic identity.
Example 2 – Sports fans and in-group bias
Spectators at a basketball game who were heavily invested in their team showed in-group bias when rating the behavior of fans of both teams. This effect was strongest when their team lost or when they had just watched a home game.
How to avoid it
In-group bias is notoriously difficult to completely avoid, but research shows we can reduce it by interacting with other groups, as well as by giving people incentives to act in an unbiased manner.
Related TDL articles
Unfortunately, the in-group bias is only one of many pushing us toward those like ourselves and away from those who are different. Others include egocentric bias, ethnocentrism, stereotypes, and several more. Read this article to learn more about how all these factors interact with each other, as well as the groundbreaking research that led to their discovery.
Historically, men have occupied a greater number of managerial and director positions. Because of in-group bias, people tend to hire those who are similar to them, meaning male executives tend to promote other men to senior roles. As our writers Melissa Wheeler and Victor Sojo analyze, this phenomenon makes it incredibly difficult for women to gain access to executive and CEO positions. Wheeler and Sojo provide various strategies for combating this bias, as well as methods for ensuring equal treatment in the promotion process.