The Behavioral Science Behind Spotify Wrapped’s Viral Success

It’s December once again, a most special time of year for many of us. We have so much to look forward to in just one month: beloved holidays like Christmas and Hanukkah, New Year’s Eve, and of course, the release of Spotify Wrapped. 

In case you’ve somehow missed the boat on this cultural phenomenon, Spotify Wrapped is an annual campaign where users of the music-streaming platform can view a dolled-up summary of their listening data over the past year, set to snippets of their favorite tracks. The 2021 version also includes new features such as personalized audio “auras” (a visual representation of one’s musical personality, akin to what would happen if you spilled some watercolor paint everywhere and took a blurry photo of the result), quizzes about the user’s own listening habits (“True or false: BTS was your most-binged artist”), and more. 

Since its inception, the campaign has become an event in its own right, and has proven very successful in promoting Spotify as a platform. In 2020, Spotify downloads increased by 21% in the first week of December, and 90 million people engaged with the campaign that year. Wrapped has been described as a "best-in-class marketing campaign," has won multiple Webby awards, and is constantly held up as a case study in how to market successfully in the digital age.2 The tweets that flood social media each December are only a drop in the tsunami of content produced about Wrapped each year.

All this success has not come without its share of backlash. Some have criticized the campaign for promoting the superficial broadcasting of taste rather than the actual enjoyment of music, while others claim it’s a great example of just how comfortable we’ve become with the surveillance of our personal behaviors.3,4

Still, millions of us go nuts for Spotify Wrapped. So why are we so obsessed with an annual reveal of our own listening data? Read on for some behavioral perspectives on the Spotify Wrapped hype, and what this phenomenon can tell us about consumer behavior in 2021.

1. It’s personalization done right

In the age of big data, personalization has quickly become a must-have for tech products. But it needs to be used carefully. Good personalization meaningfully improves the user’s experience in one way or another; bad personalization can annoy them, creep them out, or even make them feel they’ve been stereotyped. (TDL staff writer Preeti Kotamarthi has detailed the do’s and don’ts of personalization here.)

Spotify Wrapped is decidedly an example of personalization done right. It’s not throwing around data and metrics just for the heck of it; instead, it keeps the spotlight on the users themselves and focuses on the parts of the Spotify experience which users genuinely value.

For many of us, music is a big fixture of our self-concepts, and a core part of Spotify’s value as a service is that it provides a venue for identity exploration and discovery. Spotify Wrapped, then, is a very clever way of repackaging our peak experiences with the service and delivering them back to us (in an aesthetically pleasing package to boot). Ultimately, through Wrapped, Spotify is subtly reminding us of the specific ways we benefit from using the platform, and insinuating itself into the fond memories and associations we hold for our favorite music.

The other key reason that Spotify Wrapped works so well is that it validates our conceptions of ourselves as individuals. While most of us appreciate the various other examples of personalization on the platform, such as curated song recommendations, these don’t get nearly the same recognition as Spotify Wrapped. Why? Because Wrapped feels like a completely unique reflection of who we are. Features like auto-generated playlists are useful, but you can see the algorithmic thumbprints on them; they’re not just for you, per se. But nobody else on the planet will have the same Spotify Wrapped as you.

The moral of the story: people like to feel special and unique, and features that help them get there tend to be well received.

2. It turns music into a game

Gamification is the practice of “adding game-like elements to a non-game environment.” It’s a trend that’s permeated countless industries and organizations, and Spotify is no exception. 

Music has always had an element of competition to it — just think of all the self-proclaimed superfans and the archetypal gatekeepers who will challenge you to “name 5 of their albums” if you dare to casually enjoy their favorite band. Spotify has long been integrating this desire to signal superior taste into the architecture of their platform, and Wrapped is one of their key tools for doing so. By showing listeners how they stack up compared to others — for example, by awarding “top 1%” status to the biggest listeners of a particular artist — they create a sense of exclusivity, status, and leadership (although this has led some users to be more confused than pleased).

In addition to competing with others, Spotify Wrapped spurs users to compete with their past self, by comparing their year-on-year metrics. Through messages such as “You listened to 94% more minutes than last year — talk about overachieving!” Wrapped motivates users to engage with the platform even more in the future, to set new “personal bests.” 

Since 2020, Wrapped has achieved a whole new level of gamification by adding literal games, including the true-or-false quizzes mentioned above. All of these features are of course beautifully packaged in a way that is best suited to demonstrate your musical engagement to others.

3. It’s got an element of surprise

Since its inception, Spotify Wrapped has been released in early December. However, the exact release date is kept secret until the launch, which helps to build anticipation. Spotify users know they will be rewarded for using the platform throughout the year (with their own data), but not knowing exactly when it will come makes Wrapped’s arrival even more exciting.

Case in point: In the final week of November this year, dozens of articles were published with some variation on the title "When does Spotify Wrapped come out?" (You may have even heard this question being asked amongst friends, while the one person who uses Apple Music stands in a corner and weeps.) Google searches for "Spotify Wrapped" also started to spike as we approached the end of November.

[Chart: Google search interest for "Spotify Wrapped" rising sharply in late November. Data courtesy of Google Trends]

The second element of surprise comes from not knowing what your Wrapped will tell you about yourself. If you spent the past few weeks listening exclusively to Taylor Swift's Red (Taylor's Version) or Adele's 30, this may not apply to you, but for most listeners, Spotify Wrapped truly feels like receiving a gift whose contents are unknown. This is important: streaming platforms have previously given users access to data on their own listening stats, as well as charts cataloging the success of popular artists, but according to some researchers, surprising vs. unsurprising pleasures are not created equal.

According to neuroscientist Gregory Berns, the brain’s pleasure centers react more strongly when pleasures are unexpected.5 So while platforms like iTunes (RIP) may have allowed us to briefly bask in our own exquisite musical taste, having that superior taste confirmed (or not) just once a year — with a lot of fanfare attached — adds extra fun.

4. All the cool kids are doing it

On the surface, Spotify Wrapped is all about us as individuals. But at its core, this campaign is a social exercise. Formatted to fit perfectly in one's Instagram stories, this feature is designed to spur users to share. And share they did — so much so that Spotify Wrapped has come to feel like a bona fide holiday season tradition for many.

The widespread popularity of Spotify Wrapped gives rise to the bandwagon effect, our tendency to, well, hop on the bandwagon and do what everyone else is doing. The bandwagon effect explains why we succumb to online trends: our inherent need to “fit in” and feel like a part of the majority, coupled with our fear of missing out on the fun, makes it hard to resist sharing our Wrapped stats on social media when all our friends are doing so. This cycle is also self-perpetuating. The more we see others posting their reports, the more Spotify Wrapped becomes a social norm, which makes us feel even less inhibited from participating. (More on that below.)

Finally, as alluded to above, Spotify Wrapped also leverages social norms when it provides us with normative feedback on our behavior (for instance, your percentile ranking among listeners of your favorite artist). Becoming aware of these benchmarks could in turn influence how we act going forward.

5. It gives us an excuse to share

Disclosing important parts of ourselves is highly rewarding, and is the bedrock of our most cherished relationships — we all crave authentic connection with others, and we all want to be seen for who we “truly” are. But self-disclosure also comes with risks.6 Those who overshare (on social media or in the “real world”) can be perceived as “narcissistic, mundane, irritating, or problematic” by their peers.7

These risks often hold us back from sharing with others. Even though most of us love to talk about ourselves, our interests, and the precise number of times we’ve streamed “Butter” by BTS (undeniably their best English-language single), we often refrain in the interest of adhering to social norms.

But as we’ve established, the popularity of Spotify Wrapped has shifted these norms. While there may be some vocal haters of Spotify Wrapped, broadly speaking, it’s a very well-loved campaign, and its ubiquity has served as a kind of social cover: it gives us an excuse to publicly display important parts of ourselves while shielding us from judgment. (At least, from certain kinds of judgment.)

6. It plays to our love of narratives

Spotify Wrapped usually comes out at a time of transition. The weather is getting colder, school semesters are drawing to a close, and New Year's Eve is right around the corner. In times of change like these, we tend to slip into nostalgia.

Nostalgia, defined as the sentimental recollection of the past, is a highly valuable psychological phenomenon. While it can feel bittersweet from time to time, psychological research has found that it is a positive emotion that improves our mood, helps us feel connected to others, and generally boosts our self-esteem. Music has proven to be a key vessel for nostalgia, especially if the songs are “autobiographically salient, arousing, [and] familiar.”8

Music and “main-character energy”

Throughout the year, and in our lives in general, we tend to view ourselves as the main character in an unfolding story. Psychologists call this our “narrative identity”: the idea that we build our sense of self by sewing together the details of our lives into a coherent story.9,10

More specifically, we use our reconstruction of the past, our present, and our imagined future to provide our lives with meaning and purpose. In a sense, we are all the authors of our own autobiographies. And, like any good story, our lives are filled with characters, events, settings, themes, and importantly, soundtracks. 

Spotify Wrapped ties nostalgia and narrative identity together quite elegantly. While our tastes, hobbies, and habits may greatly change over the years, the "mental time travel" we experience while listening to our favorite tunes from years past will always take us back to our emotional peaks and valleys. Reflecting on these nostalgic songs is comforting, as we can fondly remember that previous chapter of our lives and see how our story has evolved from there.

In essence, Spotify Wrapped reaffirms the psychological conception of our lives as narratives, tying together the person you were at the start of the year with your current self. Listening to the soundtrack from previous episodes of our lives reminds us that we still have room for growth and new experiences in our unfolding hero’s journey. This can be a powerful salve for the anxiety we may feel in an unpredictable, changing world.

7. It builds excitement for the future

Finally, by launching at the end of the calendar year, Spotify Wrapped piggybacks on our anticipation of new beginnings, prompting us to think about our future.

Researchers have shown that temporal landmarks influence aspirational behavior: the “fresh-start effect” describes how individuals are more likely to pursue goals (such as following a diet or starting a gym routine) after some key date has passed. A potential explanation is that temporal landmarks distance us from the imperfections of our past selves and allow us to focus on improving our future selves.11,12

Just as it spurs us to look back, Spotify Wrapped also encourages us to look forward to the new year, imagining how we may want to change or do things differently. In this way, it may prompt us to set new goals. These may be related specifically to our listening habits (e.g. “I will listen to more educational podcasts”), but they could also extend to our wider aspirations (e.g. “I will go to the gym more often, and I’ll set up a great playlist to motivate myself”).

That’s a wrap

In an age where consumers are, quite simply, sick and tired of being advertised to,13 Spotify Wrapped is one of those rare marketing initiatives that manages not only to avoid drawing users’ ire, but to bring them genuine delight. It’s also bucked the trend of mounting backlash to corporate mining of consumer data,14 suggesting (for better or for worse) that users are still willing to accept surveillance in exchange for something they value. That may be an ego boost, social connection, the chance to reflect and marinate in nostalgia, or a combination of all of these.

That said, only time will tell how long the Spotify Wrapped craze can continue. This December, five years after the very first Wrapped hit our devices, one could detect a hint of weariness in social media users' reactions to the latest iteration of the campaign. Perhaps most notable: the text that accompanied each screen of Wrapped 2021 was trying just a little too hard to emulate teen internet-speak ("You always understood the assignment"; "You deserve a playlist as long as your skincare routine"), and it rubbed people the wrong way. Even The New York Times demanded to know when "Spotify Wrapped [got] so chatty."

Things move fast on the Internet, and not even a cultural phenomenon as massive as Spotify Wrapped is guaranteed longevity. As more and more companies try to recreate the success of Wrapped, we will soon see whether the public becomes disillusioned or bored. Will other companies reach the heights of Spotify Wrapped, or will consumers move on before they get the chance? 

We can’t say for sure, but for now, we wish a very merry Wrapped season to all who celebrate, and we hope you were pleased with your results.

Katie MacIntosh takes sole responsibility for all BTS references in this article.

The Dangers of an Artificially Intelligent Future

There can be no doubt that we’ve entered the latest revolutionary period in human history, The Technological Revolution. This new era promises efficiency, convenience, communication, equal access to information, and unrivaled prosperity, but at what cost? 

It is easy to be dazzled by the technological prowess of modern smartphones, self-driving cars, or VR gaming, and forget that these machines were built by humans, in all their irrational and illogical glory. Programmers become choice architects: they have the power to shape the contexts in which people make decisions, and thus those decisions themselves.1 The designers of these technologies, however, are susceptible to the same unconscious biases and prejudices as the rest of us, and the resulting technologies—AI and machine learning algorithms in particular—threaten to exacerbate social inequalities by encoding our human biases and proliferating them on a massive scale. 

Machine learning, explained

Machine learning (ML) algorithms may not be the great predictors we believe them to be. They merely replicate society as it is and was, rather than predict what it could be—or, more importantly, what we would like it to be. 

Algorithms seem quite complex, and in some cases they are, but they are hardly beyond human comprehension. In fact, we use predictive algorithms in our heads hundreds of times each day. What should I eat for dinner today? Perhaps I could stop by the supermarket on my way home from work and pick up some vegetables to go with the leftover salmon I have from yesterday. That would be the most cost- and time-effective option, and in reaching that conclusion, you have just run a simple predictive algorithm in your head.

When we run these calculations in our heads, we draw from our lived experience and our learning history to inform our decisions. Machine learning algorithms, meanwhile, make choices based on what they've learned from datasets fed to them by their developers. If you regularly surf the internet, then you will be familiar with reCAPTCHA, a security feature that asks users to (for example) select all images containing traffic lights. This is basic machine learning at work: Google uses your responses as labeled training data to make its image-recognition models better.
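
To make that "learn from labeled examples" loop concrete, here is a minimal sketch using scikit-learn's bundled handwritten-digits dataset as a stand-in for the kind of labeled images that CAPTCHA responses help produce. It illustrates supervised learning in general, not Google's actual pipeline.

```python
# Minimal supervised-learning sketch: the model only "knows" whatever the
# labeled examples show it. The digits dataset stands in for labeled images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

images, labels = load_digits(return_X_y=True)  # 8x8 grayscale digits, flattened
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                    # learn from labeled examples
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```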

Datasets used to program more advanced machine learning algorithms include collections of human faces for facial recognition software, information about successful employees for application screening software, and locations of police arrests for predictive policing software. So, how intelligent is our artificially “intelligent” future? 

How algorithms learn our prejudices

Joy Buolamwini, a graduate researcher at MIT, drew attention to the issue of algorithmic discrimination when she unveiled a discovery she had made while working with facial recognition software. In her 2016 TED Talk “How I’m fighting bias in algorithms,” which has over 1.2 million views at the time of writing,2 Buolamwini describes a university project she undertook titled “Aspire Mirror,” where she attempted to project digital masks onto her reflection. She ran into trouble when the facial recognition software she was using failed to recognize her face—until she put on a white mask. 

Facial recognition systems are ML algorithms trained on large datasets. The algorithm identifies, collects, and evaluates facial characteristics and compares them to existing images in the database. If you have ever, like me, tried and failed to apply one of TikTok's or Instagram's facial filters to one of your pets, that's because the dataset used to train the facial recognition software contains only human faces and is unfamiliar with animal facial characteristics.
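
At its core, the matching step is a nearest-neighbor comparison between "embeddings" (numeric feature vectors that a trained model extracts from face images). The sketch below shows only that comparison step; the embedding model, the enrolled database, and the similarity threshold are all assumptions for illustration.

```python
from typing import Dict, Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray,
             enrolled: Dict[str, np.ndarray],
             threshold: float = 0.8) -> Optional[str]:
    """Return the enrolled identity most similar to the probe face, if any."""
    best_name, best_score = None, -1.0
    for name, vector in enrolled.items():
        score = cosine_similarity(probe, vector)
        if score > best_score:
            best_name, best_score = name, score
    # If nothing the system has learned resembles the probe (as in the
    # "Aspire Mirror" failure), it simply reports no match.
    return best_name if best_score >= threshold else None
```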

The problem is that facial recognition algorithms are overwhelmingly trained using datasets tainted by sampling bias. The 2018 study "Gender Shades" found that two widely used facial-analysis benchmarks were overwhelmingly composed of lighter-skinned individuals (79.6% for IJB-A and 86.2% for Adience).3 The study also revealed that darker-skinned women were the most misclassified group, with error rates of up to 34.7%, compared to 0.8% for lighter-skinned men. Similar results have been found for Amazon's facial recognition software, Rekognition.4
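
The methodological point of "Gender Shades" is simple to reproduce in miniature: report a model's error rate for each demographic subgroup rather than only in aggregate. The figures in this sketch are invented for illustration and are not the study's data.

```python
import pandas as pd

# Disaggregated evaluation: the overall error rate can look respectable
# while one subgroup bears most of the errors. Numbers are made up.
results = pd.DataFrame({
    "group":  ["lighter_male", "lighter_female", "darker_male", "darker_female"],
    "n":      [1000, 800, 300, 200],
    "errors": [8, 16, 36, 62],
})
results["error_rate"] = results["errors"] / results["n"]

overall = results["errors"].sum() / results["n"].sum()
print(results[["group", "error_rate"]])
print(f"overall error rate: {overall:.1%}")  # hides the per-group disparity
```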

It is no surprise, therefore, that the facial recognition software was failing for Joy, a black woman. The algorithm failed to recognize her face because its training data contained very few examples of faces like hers, an issue that ethnic minorities, particularly women of color, are accustomed to. As machine learning algorithms creep quietly into every aspect of our lives, it is imperative that we illuminate the hidden faces in our datasets to eliminate existing bias.

Algorithmic bias and hiring

Automated tools to screen job applicants are another example of machine learning algorithms that are often based on biased datasets. These algorithms are gaining traction due to their time- and cost-saving benefits. Vendors claim the technology can scan resumes for buzzwords associated with desirable candidates.5 The problem is, these algorithms are trained on datasets of the company's existing, successful employees.

It is no new revelation that the corporate world is already awash with gender, racial, and disability discrimination, and this only increases as you climb the corporate ladder. A landmark study in this area, entitled "Are Emily and Greg More Employable than Lakisha and Jamal?", famously found that resumes with white-sounding names received 50% more callbacks than identical resumes with African American names attached.6 This existing racial bias in hiring practices is contained in the datasets of current employees and can easily be encoded into an ML algorithm. The algorithm simply learns by example, and our society does not set the best example.

Take gender bias, for example. If a woman applies for a job at a firm that uses an algorithm in its hiring process, she is more likely to be rejected than her equally qualified male counterparts, because the algorithm has learned (from its biased dataset) that men are more likely to be successful employees. It believes this because there tend to be fewer women in high-powered corporate positions than men. In fact, this is precisely what happened when Amazon tried to introduce AI into its recruiting process: the algorithm reportedly learned (among other things) to penalize candidates whose resumes included the word "women's" (as in "women's chess club") and who had attended all-women colleges or universities. AI is oblivious to this inequity, and will only crystallize these existing biases at scale as hiring technology displaces human judgment.
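
A toy model shows how easily this happens. In the fabricated example below (not Amazon's system), the historical "hired" labels happen to disfavor resumes that mention "women's", so a simple bag-of-words screener learns a negative weight for that token.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated resumes and biased historical outcomes, for illustration only.
resumes = [
    "captain of chess club, python developer",
    "captain of women's chess club, python developer",
    "varsity soccer, java engineer",
    "women's varsity soccer, java engineer",
]
hired = [1, 0, 1, 0]  # the biased labels the model learns from

vectorizer = CountVectorizer()   # tokenizes "women's" as the token "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print('learned weight for "women":', round(weights["women"], 3))  # negative
```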

How algorithms create reality

Machine learning and AI are increasingly held up as tools to predict the future. But if they are not used carefully, there is a danger that they will actually just end up creating the conditions for the same patterns to continue, often at the expense of groups that are already vulnerable. 

A prime example of this effect can be seen in predictive policing software such as PredPol and CompStat, which boast of their ability to predict crime hotspots that vary by time of day. This allows police forces to deploy officers to where they are needed most, to intervene more efficiently when a crime occurs. Seems like a noble agenda—so what's the catch? Well, these algorithms are trained using datasets on the number of arrests in given areas. As in the examples above, this means that the algorithm becomes a reflection of our present socio-political environment—one where police are most often deployed to underprivileged areas inhabited by black and minority groups. This creates a pernicious feedback loop: data from policing is used to justify additional police presence, which in turn leads to more arrests.7 Because of examples like these, the mathematician and data scientist Cathy O'Neil has dubbed certain machine learning algorithms "weapons of math destruction" (WMDs).
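
To see how such a loop can run away with itself, here is a deliberately oversimplified simulation with invented numbers: two districts have identical underlying crime rates, patrols follow past arrest counts, and only patrolled crime gets recorded.

```python
# Two districts with the SAME daily crime rate; the only difference is a
# small initial gap in the recorded arrest data.
true_daily_crimes = {"district_A": 5, "district_B": 5}
recorded_arrests = {"district_A": 6, "district_B": 4}

for day in range(365):
    # Patrols go wherever the historical data says the "hotspot" is...
    target = max(recorded_arrests, key=recorded_arrests.get)
    # ...and only patrolled crime ends up in the arrest records.
    recorded_arrests[target] += true_daily_crimes[target]

print(recorded_arrests)
# {'district_A': 1831, 'district_B': 4}: the data now "shows" that
# district_A is the problem area, even though crime was identical.
```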

The issue here is that there is a massive gap between the crime data recorded in poor areas and in middle-class and wealthy ones, because the latter are policed far less. These algorithms also use arrests as a proxy for crime, even though many arrests never end in a conviction. This is an example of Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. If the true goal of predictive policing is to reduce harm, then arrest counts are a poor substitute for measuring the harm police actually reduce or avert by being able to intervene quickly.

A better future with AI

It is clear that AI and ML have the ability to damage social equality and democracy, but could they also provide the solution to what we fear? For example, if current bias is correctly accounted for, could hiring technology actually serve to improve equality in the workplace? Very likely—as long as people take action to ensure it.

Making algorithms more equitable

In a famous study titled "Orchestrating Impartiality," researchers compared the results of "blind" orchestra auditions, in which judges could not see applicants while they performed, with those of the traditional face-to-face audition model.8 Since blind auditions were introduced, the number of female musicians in major orchestras has increased roughly fivefold, revealing an unsavory dose of gender bias in earlier orchestra hiring processes.

Technology has the ability to remove biases, but only if our algorithms are trained to do so. For instance, hiring algorithms could be trained to look solely at an applicant's relevant experience rather than potentially bias-inducing variables such as name, race, gender, zip code, or whether or not they went to an Ivy League university. In this way, AI could offer its own version of blind auditioning. Amazon, following the revelations about its discriminatory hiring algorithm, recalibrated its AI to make it neutral towards terms like "women's chess club" (though critics said this was still not enough, and Amazon later scrapped the tool entirely).
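
As a rough sketch of what that could look like, assume an applicant table with the hypothetical columns below; protected attributes and obvious proxies are dropped before training. One important caveat, noted in the comments: dropping columns alone does not remove bias that re-enters through correlated features or through biased historical labels.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical column names for a "blind" screening model. Dropping columns
# is not a complete fix: bias can still leak in via correlated features
# (e.g. hobbies, employment gaps) or via the historical "hired" labels.
PROTECTED_OR_PROXY = ["name", "gender", "race", "zip_code", "university"]
JOB_RELEVANT = ["years_experience", "certifications", "skills_test_score"]

def fit_blind_screener(applicants: pd.DataFrame, hired: pd.Series):
    """Train a screener using only job-relevant features."""
    features = applicants.drop(columns=PROTECTED_OR_PROXY, errors="ignore")
    features = features[JOB_RELEVANT]
    return LogisticRegression(max_iter=1000).fit(features, hired)
```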

This approach could be applied in hiring processes and beyond to remove existing biases, demonstrating that AI has immense power to uplift and protect our society if it is used conscientiously. 

Being mindful about machine learning

In the words of Eckhart Tolle, “Awareness is the greatest agent for change.” Greater awareness regarding how these algorithms are used, how they impact us, and where to go for help is imperative for moving forward in a world of AI.

Joy Buolamwini and Cathy O’Neil are both founders of organizations that seek to reduce algorithmic discrimination, known respectively as The Algorithmic Justice League and ORCAA. ORCAA is a consultancy firm that can be hired to evaluate whether an organization’s use of algorithms is ethical and in line with the company’s values. ORCAA’s aim is to incorporate and address concerns from all the stakeholders of an algorithm, not just those who built or deployed it. They also assist in the remediation of fairness issues, distribute certifications of fairness (where applicable), and assist with education regarding algorithmic fairness.

It’s also worth noting that in many cases, such as hiring, it is in a company’s best interest to evaluate applicants fairly. If not, it can easily miss out on applicants who could bring exceptional talent and experience to the business. In cases like these, an external audit is an excellent way to verify the fairness of an automated application-screening tool.

Regulating AI

It is clear that the opacity of ML algorithms has allowed some companies to use them to the detriment of society. In a more artificially intelligent future, we will need publicly agreed-upon definitions of fairness that can be referenced and upheld in a court of law. Algorithms should be rigorously tested prior to their release, and companies should have to demonstrate compliance with anti-discrimination laws.

Already, regulators and lawmakers in the US are in the process of developing standards for algorithmic auditing, including translating existing fairness laws into rules for algorithm builders.7 Algorithmic fairness testing could follow an FDA-like approval process, where the burden of proof of efficacy lies with the parties developing the algorithms. There should also be disclosure as to when these algorithms are being used to evaluate a person and what the algorithm's desired outcome is. This would allow people to question an algorithm's results and seek legal assistance if required.
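
To give one concrete example of the kind of check such a compliance process might include, the EEOC's informal "four-fifths rule" compares selection rates across demographic groups. The sketch below uses invented audit figures; it is a screening heuristic, not a legal determination.

```python
def disparate_impact_ratio(selection_rates: dict) -> float:
    """Ratio of the lowest group's selection rate to the highest's.

    Under the EEOC's informal "four-fifths rule," a ratio below 0.8 flags
    possible adverse impact and warrants closer review.
    """
    rates = list(selection_rates.values())
    return min(rates) / max(rates)

# Hypothetical audit figures: the share of each group's applicants that an
# automated screener advanced to interview.
rates = {"group_a": 0.30, "group_b": 0.21}
print(round(disparate_impact_ratio(rates), 2))  # 0.7 -> flagged for review
```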

However, this still leaves some questions. For instance, who should be the one to enforce laws surrounding machine learning algorithms? Can we trust corporations to have our best interests at heart and be transparent about the inner workings of their algorithms? Do we trust governments to pass and effectively enforce strong regulations in this area? Even as we move towards a system for regulating algorithms, it remains to be seen how it will operate in practice. These questions should be answered with the input of the public, and of experts on equity and AI. 

Getting to the root of the problem

As with most problems of this kind, education is imperative. Choice architects (in this case, programmers) carry a burden of responsibility when creating algorithms with the power to change the society we live in. Training programmers in algorithmic fairness, with the help of behavioral science, is a crucial next step. We urgently need to act on algorithmic fairness as AI continues to permeate every level of our society.

Conclusion

In the words of American author William Gibson, “The future is here, it’s just not evenly distributed”—meaning that not all groups are fully reaping its benefits yet. AI is still in its infancy, and in many cases, it merely replicates the world as it is or has been, replete with biased and discriminatory practices. Encoded bias not only features in facial recognition software, hiring, and policing, but it also appears in college application processes, health and automobile insurance, decisions about creditworthiness, and more. It is already shaping the society we live in!

Have we not learned from the 2008 financial crisis that we shouldn’t wait for disaster to strike before regulating modern financial instruments, or, in this case, modern technologies? If not, the future AI is promising us may merely be a reflection of our past mistakes. Who wields the power over our future? Governments? Or a handful of corporations controlling AI whose main focus, for now, is their bottom line?

It is up to us to educate ourselves about issues of algorithmic fairness. It is up to us to lobby for regulation that holds machine learning algorithms to ethical standards. It is up to us to seek legal help when wronged by these algorithms, setting legal precedents that benefit the rest of society. We have the opportunity to design a society based on equality and acceptance by weaving these values into our technology. This article warns of the dangers we face if we continue on our current trajectory: the dangers of an unregulated artificially “intelligent” future.