Social Media and Moral Outrage

On October 3, 2021, a former product manager at Facebook leaked internal documents from the company to the Wall Street Journal. The documents revealed new information about the 2018 change to the Facebook newsfeed algorithm that boosted Meaningful Social Interactions (or MSIs) between friends and family.2 At the time, the change was generally heralded as a positive one, expected to increase engagement and improve user well-being. Internal memos reveal that, in fact, the opposite happened: the overhauled algorithm “rewarded outrage” and incentivized sensationalism. 

How did this seemingly well-intended change backfire? Why would prioritizing posts from those we’re closest to result in such a catastrophe of hate, anger, and negativity? The MAD model of moral contagion offers a behavioral perspective that can help us answer these questions. 

The evolutionary roots of moral outrage

Research has found that content is often circulated more widely on social media if it is “moralized,” meaning that it “references ideas, objects, or events typically construed in terms of the interests or good of a unit larger than the individual.”3 In general, emotional content is more likely to be shared online, but when it comes to politics and news stories, negative emotions are particularly effective at increasing a piece of content’s reach.4,5,6 Political news framed in terms of morality7 and tweets containing moral-emotional words tend to propagate more on social media. In comparison, posts with only moral or only emotional words do not enjoy such engagement.8

Our habit of reposting emotional content most likely has evolutionary roots. Humans share emotional stories as a means of building social bonds. It is hypothesized that sharing contributes to collective action by helping create a perception of similarity between people, facilitating emotional coordination and aligning our views of the world.9 

But why our propensity for moral outrage specifically? In part, we may have evolved to gravitate towards this kind of content because moral outrage can act as a signal of our social identity, values, and ideals to others in the group (as well as to ourselves).10 Highlighting wrong or immoral behavior is a powerful way for us to maintain or enhance our reputation in a particular social circle.  

Getting MAD: Why moral outrage spreads online

Taken together, moral and emotional expression has evolutionary roots in that it helps build social bonds, elevate one’s reputation, and signal one’s own identity, morality, and values. 

Unfortunately, when moral-emotional content spreads, it may act as an antecedent to political polarization, encouraging the circulation of political news within political identity boundaries.11,12 Moral outrage finds encouragement within these boundaries.13 It has the potential to feed into major political upheaval, and potentially to create such a deep divide between different group identities that extreme actions (such as violence) come to be seen as acceptable.14 

The MAD model of moral contagion was developed by researchers Molly Crockett, Jay Van Bavel, and William Brady to explain why moralized content spreads so fast on social media. According to their foundational paper, “The MAD model posits that people have group-identity-based motivations to share moral-emotional content, that such content is especially likely to capture our attention, and that the design of social-media platforms amplifies our natural motivational and cognitive tendencies to spread such content.”15

Let’s break down each of these dimensions, starting with motivations.

Group identity–based motivations for moral outrage

Humans are social creatures: we thrive in groups. In prehistoric times, this required us to build trust and good relationships with those who helped us during collective or individual distress. At the same time, it was crucial to be alert to threats from rival groups, lest we get killed or lose our valuable resources. We developed shortcuts to figure out whom to trust and whom to distrust.

We live with these same evolutionary instincts in the 21st century: those who we perceive to share our values and views are considered our ingroup, while those who we perceive to disagree with us are considered our outgroup. 

When group identity is easily noticeable, as it often is on social media, we tend to shift from self-focused motivations to group-focused ones. Our attitudes, emotions, and behaviors start to be influenced more by evaluations made along the lines of this group identity, rather than individual goals.16 We tend to engage in actions that distinguish the ingroup from the outgroup, to reinforce our belongingness to our ingroup and display our affirmation of their values and morals—especially when threats emerge.17

Consider the events around the #MeToo campaign, which went viral in 2017. As more and more people started posting about their experiences with sexual harassment and abuse, certain groups felt more threatened than others. Some reacted defensively, suggesting that all men were being punished for the misdeeds of a few. These posts often framed the issue as a matter of opposing “sides”: from the defensive posters’ perspective, men were the ingroup and women were the outgroup. In some cases, this devolved into hostility towards #MeToo supporters and those who shared their stories publicly (ingroup: people who disapproved of changing the status quo; outgroup: people who supported change).18

Research shows that bashing the outgroup and expressing animosity towards “them” on social media is far more effective at driving engagement than posts that merely express support for the ingroup.19 Moral-emotional posts expressing such animosity enjoy a bigger reach, especially since social media algorithms are designed to further promote content that is performing well on engagement metrics.

Ads on social media can have the unintended consequence of further entrenching these group identities. Ad delivery algorithms seem to “effectively differentiate the price of reaching a user based on their … political alignment… inhibiting political campaigns’ ability to reach voters with diverse political views.” In other words, it is cheaper for an entity to reach an ingroup audience than an outgroup audience.20 For an entity operating on a small budget, this could mean allocating a significant proportion of that budget to reaching the ingroup, thereby contributing to the political polarization of the general populace.

On the other hand, it also seems that exposure to outgroup views online can strengthen a person’s ingroup beliefs.21 U.S. researchers have found that, after Republican study participants were exposed to Democratic viewpoints online, they expressed more conservative attitudes. (The same trend was also seen in Democratic participants, but the effect for this group was smaller and not statistically significant.) So the answer to this problem is not simply to expose users to more diverse viewpoints; in some cases, this may backfire and exacerbate polarization.

Taken together, the evidence points to a widening gap in the populace, with social media aggravating this polarization, contrary to platforms’ stated mission of bringing people together and providing a stage for meaningful conversation.22 By promoting content along the lines of group identity, social media seems to be amplifying the divide and enabling a lack of shared reality between opposing groups.

Let’s move on to the second part of the MAD model of moral contagion.

Attention and moral outrage

The social media business is modeled around the concept of the attention economy. In such an economy, human attention is deemed a scarce resource that can be harvested for profit. In a bid to do just that, social media algorithms are designed to promote content that engages people for as long as possible, encouraging them to spend more time online than they may have intended. This creates more opportunities to show users paid ads, as well as more data that can be used to optimize targeting algorithms and increase revenue.

The vast stores of data these companies have accumulated over time, combined with their computational power, have been harnessed with a singular aim: to exploit human attentional resources in every way possible. These algorithms are essentially amoral: they have no sense of right or wrong, and are sensitive only to what maximizes attention. As a result, algorithmic biases overlap with human biases to serve up more negativity and moral outrage.

Emotions and morals take center stage in any political discussion or event. Moral and emotional words that capture more attention in laboratory settings were indeed found to be associated with greater sharing when they appeared in posts on social media.23 Every moral and emotional word in a tweet is associated with an average increase of 20% in its diffusion (sharing).24 

Analyzing bad behavior allows us to judge people and their character. Extreme and negative evaluations are indeed attention-grabbing.25 Interested parties have historically been able to game our sensitivity towards moral contagion to rile up political action through mainstream media such as radio, newspapers, and television. Social media has opened this possibility up to the masses, and has accelerated the speed and scope of outrage marketing. 

No entity in the history of the world has wielded the kind of power social media holds over our collective attention. In the past, prosocial online campaigns like #MeToo and #BlackLivesMatter captured human attention through our bias towards morality and contributed to meaningful offline activity, like coordinated protests and policy changes. Other online trends have had antisocial effects: the QAnon conspiracy theory and the Capitol Hill riots are two examples of the harmful consequences of moral contagion online.

Design and moral outrage

Finally, let’s discuss the third leg of the MAD model of moral contagion. Social media is designed to appeal to our System 1 brain. It encourages quick actions from users: viewing posts, reacting to them, writing our own, and building relationships online. It piggybacks on our understanding of words like “share,” “love,” “like,” and “friends” to help us feel comfortable with an entirely new way of building relationships, a process that has historically required a great deal of face-to-face communication.

Social media has removed significant friction from processes like sharing opinions, debating, and calling out injustices—and at the same time has made it easier for people to express animosity, antipathy, and hate towards people or entities. Face-to-face interaction invokes a sense of empathy in us: it makes us aware of how our comments or actions may be received on the other end, and anticipating how we might feel in the other person’s shoes often keeps us from disrespectfully expressing our discontent.26 Research also shows that interacting through speech is more likely to positively influence our evaluation of someone than text-based interaction. Yet text is how most social interaction takes place on social media.27

People are more likely to rely on their emotions when forced to make a moral decision quickly.28 They’re also more likely to react quickly when thinking along the lines of their moral beliefs and values.29 This strips away yet more friction, making the expression of moral outrage far easier online than offline.

Thanks to algorithmic recommendations, people easily find themselves in echo chambers where their outrage, especially against an outgroup, may be well received, even encouraged. The reduction of humans to two-dimensional icons also makes us readily vocal about punishing a wrongdoer.

Furthermore, the always-on nature of the internet and its 24-hour services around the world mean that moral outrage is no longer constrained by time or place. A person doesn’t need to be physically present or be a local to express outrage at something. Indeed, people consume more information about immoral actions online compared to offline.30

Moral outrage and social media: A case study

Knowing what we know now about the MAD model of moral contagion, we can see why the MSI initiative failed to deliver on its promise of a better user experience. Facebook believed that the dominance of video and professionally produced posts was turning people into passive consumers of content. Time spent on the site was rising, but people were not actively engaging with what they saw. Prioritizing interactions with people on one’s friends list was expected to encourage active responses to the content people were scrolling through, and to improve their well-being.

Where the MSI system went wrong was in the way it scored content. It skewed the algorithm towards posts that evoked (or seemed to evoke) emotional reactions, which then got reshared, attracted long comments (as is often the case in internet flame wars), and were likely to circulate widely within particular communities. 

Consider a post using moral-emotional language in a political context. This post is likely to evoke the author’s political identity, and readers are likely to react depending on whether the poster belongs to an ingroup or an outgroup. Even more passive users might still use the react buttons (“haha,” “anger,” “love,” etc.) to signal an emotional response.31 

With each such reaction, the MSI algorithm adds 5 points to the score that decides how the post will be prioritized. The comments are more likely to come from people with politically extreme views.32 They might support or oppose the view being expressed (if the post somehow finds its way outside its political-ideological boundaries), and they might also contribute to widening the perception gap between groups.
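To make the mechanism concrete, the scoring described above can be sketched in a few lines of Python. This is an illustrative toy model, not Facebook’s actual implementation: the 5-point weight per emoji reaction comes from the reporting discussed here, while the other weights (`like`, `comment`, `reshare`) and all function names are hypothetical, chosen only to show how weighted engagement scoring favors provocative posts.

```python
# Hypothetical MSI-style engagement scoring. Only the 5-point reaction
# weight is from the reporting cited in this article; the rest are
# illustrative assumptions.
WEIGHTS = {
    "like": 1,
    "reaction": 5,   # "haha", "anger", "love", etc.
    "comment": 15,
    "reshare": 30,
}

def msi_score(counts: dict) -> int:
    """Sum weighted interaction counts to rank a post in the feed."""
    return sum(WEIGHTS.get(kind, 0) * n for kind, n in counts.items())

# A post that provokes emoji reactions and a comment war outranks a
# calmly liked post, even with far fewer total interactions.
calm_post = msi_score({"like": 100})                       # 100 points
outrage_post = msi_score({"reaction": 20, "comment": 10})  # 250 points
```

Under any weighting of this shape, content that triggers reactions and long comment threads is systematically boosted, which is exactly the failure mode the leaked memos described.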

Given that people on social media are more likely to be friends with people who share similar tastes, and that the algorithm demotes posts from non-members and strangers, people are even less likely to see a post from an outgroup. On the one hand, there are some upsides to this, as online interaction with those who disagree can be frustrating or upsetting.33 But it also means that these groups remain consistently separated on social media, and this propagates a lack of shared reality. This post containing moral-emotional language is likely to become an instance of moral contagion, thanks to the MSI algorithm. 

Final words

Social media algorithms prioritize the spread of content that has proven to be popular—irrespective of what that content actually is—for the sake of monetizing this engagement. Successful content is often crafted to provoke moral outrage, which humans have natural, group identity–based motivations to share.

It is when this kind of online activity leads to offline consequences (like discrimination, intimidation, or even violence) that we must begin to question the stances that tech giants often take on free speech. Do we proactively moderate online content, at the expense of making some concessions on our rights to free speech? Or do we address the algorithm and the business model underneath it all, that allows for the amplification of potentially hateful speech in the first place?

For better or for worse, tech platforms have become part of the social fabric of the contemporary world. They have a responsibility to take action, as they exert a major influence on the path humanity takes collectively.
