How Cognitive Bias Can Sabotage Your Resolutions

With the new year upon us, a lot of us find ourselves exploring novel habits, routines, and hobbies to invest our time and effort in during 2021. Whether it’s eating healthier and going to the gym more, or learning a new language to exercise our brains, these goals require some level of planning, especially if we want to be effective and efficient. We need to start thinking long term, and move away from improvising what to do at the dawn of each new day. 

The issue is, planning for the future doesn’t come naturally to us. We’re all familiar with different versions of procrastination and the myriad vices associated with it: video games, social media, Netflix, you name it. We’re very aware of these distractions but still find ourselves falling victim to them. When we feel a burst of hope and tell ourselves that tomorrow will be different, we actively try our best to start fresh and vow never to fall into the same trap. But slowly, we drift back to what seems like our natural state: laziness. 

It’s really disheartening, and as someone who’s been frustrated by this for a long time, I wanted to find answers. We each have biological and environmental differences, so how is it possible that we all share this seemingly ingrained flaw? And how can we strengthen ourselves so that we make better decisions?

Stick with me as we work through the historical and psychological explanations for these phenomena, which will give us the necessary insight into how we can increase our productivity, make better choices, and build a happy and successful life for ourselves.

So let’s get started. What does history teach us about the origins of our inability to make effective decisions?

The ancestral origins of our planning problems

I came across a possible answer after reading Sapiens, a beautiful book by Yuval Noah Harari that details the rich history of the human species.

The early foraging and hunter-gatherer societies engaged in a day-to-day method of survival. When their families needed food, men would go out to hunt game or collect vegetation. Since they had to migrate as their prey moved, they didn’t have any concept of a long-term reserve. 

Climate shifts also kept them from settling down, since it was much safer to seek a new shelter than to fortify their current abode. Other than preparing for the winter, people didn’t collect food or belongings, since they viewed it as a burden to have more things to carry around. It slowed them down. And in the wilderness, where predators came from all directions, this was a matter of life or death. Some societies went as far as killing off members of the group, like the elderly or sick, if they slowed them down. There was no real need to plan for the future, so they centered their life around the present.

The domestication of the human race changed everything. When these earlier societies discovered the potential of agriculture, a brand-new way of life emerged, one where settling down was finally a possibility. They couldn’t just live day-to-day anymore, moving as the weather changed, their game shifted, or predators approached. Despite the odds, they had to choose a single plot of land and survive. They quickly replaced their previous hunting efforts, establishing community reserves and developing homes. This brought dramatic changes to society as their sudden need to plan for the future prompted a new way of life.

Here, for the first time (on the communal, not individual level), humans began to plan ahead.

So yeah, we as humans don’t seem to have the whole “planning for our futures” thing ingrained in our genes. This is alarming though, seeing that it’s practically the center of our existence nowadays. What college will I go to? What job do I want to have? What do I need to do so that I can be happy? We aren’t spending our days searching for food as often as we spend them thinking about our futures. 

So in the context of history, it’s quite comforting to know that our ancestors didn’t arrive prepared to plan for the future, much less excel at it. I don’t say this so that we become complacent, sitting by instead of taking control of our futures. It’s just good to know that we’re not alone in this struggle. The advent of agriculture clearly shows us how society had to shift its focus towards the future. But this prompts the question: How can we explain our individual ineffectiveness at planning for the future?

There are two cognitive biases that are useful for understanding our issues with planning: future-self continuity and hyperbolic discounting.

Future-self continuity

Psychology enlightens us with a deeper look into the way we think about the future. We know from numerous studies that instead of being hardwired to maximize our long-term rewards, we resort to short-term gratifications. In practice, this is self-evident. But where does this come from and how can we get better at working against it? The answers lie in a psychological concept known as future-self continuity.

This one’s a little hard to grasp, so let’s start with a thought experiment. If I told you to think about yourself 5 years from now, who exactly are you imagining? 

The answer I’m looking for isn’t a description of that imagined being, whether they’re taller, skinnier, or happier. What I’m really asking is this: do you believe that this imagined version of yourself is actually you? Do you treat the person you want to become (i.e. you in the future) as an aged version of yourself? Or does it feel more like thinking about a stranger, purely a figment of your imagination? 

According to Hal Hershfield, if this “future self” still feels like you, then you have a high level of future-self continuity: your present self is continuous with your future self. If, on the other hand, the thought experiment conjures up a person who feels foreign, more like a stranger, then you have a high level of future-self discontinuity.1

Don’t worry if you fall into the latter category. As humans, we intrinsically gravitate towards future-self discontinuity, treating the future not as part of the journey but more like an alternate reality. These may seem like a jumble of fake-deep questions, but psychological studies show that the way we think about our futures has a lot to do with the issues we’re struggling with. 

One study by researchers at Stanford University discusses how, when people don’t have a strong sense of future-self continuity, they’re just as inclined to reward a stranger as they are to reward their future self.2 

Think about that for a second. Because we’re unable to feel continuous with our imagined reality in the future, we’re equally as likely to give our time, energy, and money to a complete stranger as we are to invest those same resources in our future selves.

So what does this mean for us? The degree to which we feel connected, in the present, to our future self, dictates whether we ensure the well-being of that future self.1 So improving our future-self continuity can lead to improvements in our decision-making such that we optimize our future success. If you still think this is a bunch of useless information, let’s go a level deeper and see what happens if we don’t strengthen our sense of future-self continuity.

Hyperbolic discounting

A lack of strong future-self continuity then leads us to what’s known as hyperbolic discounting.

Imagine you’re given the option to take a $5 bill right now or wait 10 minutes and get $10. That seems like a no-brainer. You’d much rather wait 10 minutes to double your profit since it is only 10 minutes.

Instead, consider you’re asked to wait a month to get the $10. Which one seems more appealing? $5 in your pocket right now or $10 after 30 days? Researchers who conducted this simple experiment found that people of all ages—from children to older adults—tend to take the $5 right now.3

What’s observed here is that as the delay before receiving a reward increases, the value we attribute to that reward is “discounted.” An objectively greater reward in the future ends up seeming less valuable than a smaller reward available right now, even though we know the immediate reward is objectively smaller. This explains our natural tendency towards short-term gratification: the reward of eating a slice of pizza right now feels much greater than the reward of becoming a Gymshark athlete many years from now.

The reward in the long term is next to nothing in our heads.
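Behavioral economists commonly formalize this curve with a hyperbolic discount function, V = A / (1 + kD), where A is the reward amount, D is the delay, and k is an individual discount rate. The sketch below uses an illustrative value of k (an assumption, not a measured figure) to show how the $5-versus-$10 choice flips as the delay grows:

```python
def discounted_value(amount, delay_days, k=0.05):
    """Subjective present value of a delayed reward under
    hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# $10 after a 10-minute wait (~0.007 days): barely discounted at all.
soon = discounted_value(10, 10 / (24 * 60))

# $10 after a 30-day wait: the subjective value collapses.
later = discounted_value(10, 30)

print(round(soon, 2))   # → 10.0  (waiting 10 minutes costs almost nothing)
print(round(later, 2))  # → 4.0   (now subjectively worth less than $5 today)
```

With this illustrative k, the $10 a month away is subjectively worth only $4, so taking the $5 immediately feels rational even though it is objectively the worse deal.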

Combining what we’ve learned about future-self continuity and hyperbolic discounting, it’s really easy to see why we slack on building our futures. Working on a long-term project tends to get outweighed by enjoying ourselves right now because the long-term reward seems almost insignificant. Furthermore, we reduce our “future self” to just a figment of our imagination, leaving us unable to seriously consider our futures as being a part of our life trajectory. Anything we do for our “future self” now feels like we’re wasting our resources on a stranger, so why not use those resources to enjoy ourselves right now?

We can now see how these two biases lead to so many of the issues we have today with focus and productivity. We’re struggling to find the motivation to build our futures. We’re desperately trying new techniques, routines, and habits to alleviate this dilemma. But don’t feel hopeless. Now that we understand these two psychological phenomena, let’s break down how we can flip the switch and work towards a better future.

Back to the future


We now understand that a strong sense of continuity between our present selves and our future selves leads to higher levels of self-regulation and behaviors that maximize the long-term return. The diagram above gives us a theoretical goal to work towards. We need to start merging these two seemingly independent bubbles in our minds. The more that the “future you” feels like a direct extension of who you are now, the more effective you’ll be at prioritizing, scheduling, and growing as an individual, and thereby fighting the curve of hyperbolic discounting.

You 2.0

I wrote an article detailing the two lists that are essential for becoming a better version of ourselves:

  1. A list of the qualities and traits the 2.0 Version of yourself will embody
  2. A list of the projects or organizations you’re involved in, paired with how each is helping you become the person you described in list 1.4

For a more detailed walkthrough of this process, refer to the article here. Having this explicitly outlined is crucial, because without it, how will you even recognize your future self, let alone try to develop a sense of continuity with him/her?

How language can bias our thinking

Our sense of continuity isn’t a conscious thought process; rather, it’s implicitly shaped by our experiences, cultures, and languages. An interesting study by researchers at Yale University highlights the impact of language on our sense of future-self continuity.5

Let’s think about two languages, English and German, and consider the way they grammatically differentiate between the present and the future. English speakers know that in order to speak about an event tomorrow or a week from now, they add “will” or “going to” in a sentence.

“I will go to the store tomorrow.”

“I am going to fast next week.”

Languages like English are said to have strong future tenses because you can’t use present tense verbs to speak about future events. We don’t say “It rains tomorrow.”

Now consider German, which does exactly that. I’m most definitely not a native speaker of German, but according to the study, “Morgen regnet es” translates to “It rains tomorrow,” and that’s completely valid grammatically. German is a language with a weak future tense because it’s not required to add a word to differentiate a future event from a present one.

What Chen found in this study is truly astounding. Languages that have strong future tenses (like English) led their speakers to engage in activities that prioritize their CURRENT selves. On the contrary, speakers of languages with weak future tenses (like German) were more likely to act in self-regulated, future-oriented ways. What seems like trivial differences in language amounts to a huge shift in the way we perceive our future self. Chen found that speakers of languages that don’t differentiate between the present and future have higher retirement savings and make better health decisions.

Although this study only shows correlation and not causation, let’s explore why this could be the case. English makes us feel like the future is its own entity, that it’s inherently distinct from the present. With that, we compartmentalize the things that are going to occur or are planned to happen in the future within their own bubble of things to address. 

Conversely, languages like German make the future seem more immediate, as if it’s part of our current existence. It makes us believe that there isn’t much of a divide between our future self and our present self. This in turn leads to behaviors that maximize our “current” success and reward, which is really our overall success. The difference lies in our boundaries between the present and future.6

If you want to read more on this and how culture plays a part, check out this article.

Here we learned that language has a significant impact on the way we distinguish between the present and the future. Something as minute as changing the way we speak about ourselves can have a huge impact on our self-image. Choosing to embrace our future identities as who we are now, as opposed to speaking of them from a third-person point of view, can help us integrate who we want to become into who we are now. 

Instead of using speech like “I want to be a healthy person,” say “I am a healthy person.” Adopt the characteristics of your future self into who you are now and it’ll help shape your decisions almost immediately. If you use the latter phrasing and believe you are NOW a healthy person, your decisions will reflect that. You’ll make choices that are in line with the identity you outline for yourself. This can lead to an improved sense of future-self continuity, as you’re graying the boundaries between the present and future. You’re adopting the future you into the current you. You’re developing continuity.

If you want to read more on this, Todoist has a great article detailing future-self continuity and practical ways to strengthen it. My aim in this article was simply to give a basis for the next two pieces, so definitely check it out for a deeper dive.

Conclusion

The goal of this essay was to lay the foundations for understanding why we slack in long-term planning, and what we can do to improve. Future-self continuity tells us that we should actively keep our future at the forefront of our minds, constantly striving to embody our 2.0 Identity. Hyperbolic discounting teaches us the consequences of losing sight of this long-term vision, giving us an explanation for our tendency to procrastinate.

How Implicit Biases Complicate Female Mentorship

“Do you even want a career?” my Chair pointedly asked me, after I (Yasmine) revealed I was pregnant and asked for an unpaid extended maternity leave. This question came after my Chair had explained to me she had put her baby in daycare at six weeks, and that 25 years later, said child was doing just fine. As a 29-year-old, early-career, tenure-track professor looking for mentorship and support, this was not one of my favorite conversations. I felt guilty for being pregnant, and that my concerns about returning to work after only six weeks were entirely dismissed. Years later, it worked out: I got tenure without even pausing the clock. However, I always wanted to go back and say, “You know, I could have really looked up to you and admired you; you could have helped me develop as a female academic and a female leader, but instead you made me doubt myself.” 

– –

I (Kim) had just returned from a very austere Army combat deployment to Afghanistan. Having spent the last three years in “trailblazing” roles, I wanted to start a family. I reached out to a mentor whose early career mirrored mine. She stated she was “surprised” at how “selfish” I was being, as my personal goals did not align with the professional expectations she had for me. She said I should be above all the “traditional” female roles, and that “my cannon was my child and my rifle was my husband.” I could not believe that a female mentor, who I presumed would understand my perspective, could make me feel like an absolute failure of a woman. It was not until seven years later that I reflected on the damage that conversation caused. I would not marry for another eight years, and to this day still have to overcome tremendous guilt for having personal desires that do not align with the expectations of others. 

Both of these anecdotes are just that—anecdotes. Yet, they illustrate missed opportunities for meaningful mentorship between female professionals. In this series of articles, we explore why the female leader-subordinate relationship may have unique tension, and how this relationship can be improved with a renewed emphasis on mentorship.

The changing nature of gender bias

We are fortunate that, in 21st-century America, explicit bias against women has been greatly reduced. However, despite female advancement in the workforce, leadership is still disproportionately male in an array of fields including politics, business, religious institutions, legal professions, and academia.1 The consequences of gender inequality in leadership can include women having less power in decision-making, reduced access to other opportunities, and lower compensation. On an organizational level, a lack of female leadership can result in less diversity and fewer female role models for up-and-coming leaders. 

On a societal level, less exposure to female leaders means that our prototype of a leader can be viewed as disproportionately “male.” In other words, the more we are exposed to male leaders, the more we are inclined to see leadership as inherently “masculine.” For example, we may believe that a leader requires “level-headedness” and agency (being authoritative and decisive),  and that women are “too emotional” and lack these agentic qualities. What can result is a bias against female leaders that is more often implicit—that is, without conscious awareness. 

Research has demonstrated how implicit biases can influence perceptions and behaviors. In an analysis of 321 introductions for speakers at a medical conference, men only used the professional title 49.2% of the time for their female colleagues, versus 72.4% of the time for other males, revealing an implicit bias.2 Other research suggests that we are more likely to attribute a female leader’s successes to outside factors, such as luck or the simplicity of the task, while her failures are seen as a reflection of incompetence.3 Similarly, male leaders gain more perceived leadership ability when their company succeeds, but are also less likely to lose legitimacy when their company fails.4 

These and other implicit biases can further influence how women perceive themselves. For example, whereas men are socialized to be confident, assertive, and self-promoting, females are socialized to diminish and undervalue their professional skills and achievements.1 This reluctance to self-promote, despite the benefits, stems from concerns about the perceptions of others.5 For women, the adoption of agentic behaviors, such as self-promotion, can result in backlash.6 For example, an experimental study revealed that, even if they have the exact same profile, female politicians are more likely to be perceived as “power-seeking” than their male counterparts. Furthermore, her perceived “power-seeking” makes her less likely to garner votes.7 This can implicitly influence us to view women, in general, as less qualified for leadership roles than men.6 

Women are also prone to gender bias

However, here’s the interesting part. These biases are not only held by men; women are also prone to implicit biases against women. A recent study, for example, reported that the word “male” had a stronger implicit association with “brilliance” than “female,” for both men and women.8 In another experiment, male and female science faculty alike favored applications for a laboratory manager position that had been randomly assigned a male name, rating the applicant as more competent and hireable than the identical female applicant. On average, the application with the male name was offered a higher starting salary and more mentoring than the same application with a female name.9 

Implicit biases: The subordinate perspective

Implicit beliefs might bias women against female leaders. In an American Bar Association survey, a majority of female lawyers under 40 expressed a preference for male bosses.10 Another study reported that female subordinates had a greater negative bias towards female supervision than male subordinates.11 In 2016, a study by Artz and Taengnoi additionally found that in two U.S. datasets, female job satisfaction is lower under female supervision, while male job satisfaction is unaffected by the gender of the boss. The dispreference for female leadership is stronger when the leader is older, and when she adopts more “male-like” authoritarian leadership styles.11,12

These results are somewhat surprising given that social identity theory predicts that women, seeing themselves as belonging to the group of “women,” will want to help maintain their own positive social identity by having their group viewed favorably. In other words, another woman doing good makes us all look good, right? So, given this, why might women react with more hostility towards a successful female? 

One answer is derived from social comparison theory. This theory suggests that because women are more likely to identify with other women, when they see a successful woman, they view her as a threat. In other words, they compare themselves against her and, subsequently, feel bad about themselves, thus diminishing their own self-esteem. To reduce the ego-deflating consequences of comparing themselves to this successful woman, the subordinate might cast her as interpersonally hostile and unlikeable.13

Another possibility is that perhaps subordinates have expectations in line with their biases, which can lead to greater disappointment. For example, perhaps subordinates expect their female leaders to be more nurturing or empathetic than their male leaders. Thus, when the female leader doesn’t behave as expected, the subordinate might be more disappointed.10 Using the anecdote from above, did Yasmine expect her Chair to be extra empathetic, and when she wasn’t, feel disappointed in a way she never would have been had her Chair been a man?

Implicit biases: The leader perspective

Female leaders can also be biased against female subordinates. A workplace environment in which women are a minority in leadership can create competitive pressure in which they feel they have to prove themselves. This pressure was even greater a generation ago, when resources were less accessible to women than to men.14 A study by Buchanan et al. (2012) describes how women at the top might have been successful because they convinced men that they are not like other women. Relatedly, in order to assimilate into the male-dominated work environment, women may dissociate from their gender identity and distance themselves from other women. For example, consider this female U.S. Army officer’s experience with distancing from her female cohorts: 

I was the only woman in my unit for the longest time. To show the guys I was “cool,” I chewed tobacco, drove a truck, and power-lifted after work. All was well, until a second female joined the unit. You think I would feel some camaraderie. But for some reason, I had to make a point to show everyone that I was faster than her. Stronger than her. I just felt I had to highlight my “masculine” traits while downplaying her “girly” ones. She never did anything to me. In fact she looked up to me as someone who outranked her, and my first piece of advice to her was to put away her mascara and stop trying to entice the soldiers. She never scheduled another meeting with me again. Why did I feel like I had to make a point to be “different” than her?

Why might this woman have hesitated to support her subordinate? This experience illustrates what Belle Derks and colleagues (2016) refer to as the Queen Bee Phenomenon. Queen Bees make a point of distancing themselves from other women, but more specifically women who are their juniors. Successful female managers who went through the trials of navigating and rising through the ranks may feel that their juniors must similarly prove their worth, as comparatively they have not been tested in the same way.15,16 

Changing the dynamic

In a qualitative study of women who made it to the top of their academic medical career, a specific action recommended by these successful women was to find a good mentor or sponsor.17 Having women in leadership positions can also inspire other females.18 Of additional benefit: exposure to positive senior female role models may reduce the implicit biases that women may hold. For example, Young et al. (2013) found that when college women had a female professor they viewed as a role model, their implicit attitudes about women in science shifted; the exposure to a positive female leader made women automatically associate science with females more easily.19

Unfortunately, as we reviewed, building stronger female mentorship may not be so easy. First, in many fields, there is still a dearth of female leaders. A history of workplace inequality can create pressures for a leader to become a “Queen Bee,” or a subordinate to have a preference for male bosses. These pressures will likely dissipate as more women ascend into leadership positions. In the meantime, we encourage a renewed emphasis on mentorship between female professionals. In our next article, we will discuss ways for leaders and subordinates alike to overcome these barriers and change the dynamic of the mentor-mentee relationship.

‘Tis the Season: The Science of Saying Thanks

Needless to say, 2020 has been hard for many employees. The global challenges created by the COVID-19 pandemic have forced workers to adapt to new conditions, and take on extra responsibilities during the crisis. The holidays are usually a time for employers to show their appreciation for their employees’ efforts—but this year, the usual options aren’t on the table. Christmas parties will be virtual, and year-end vacations will be more isolated as fewer families are able to travel to see loved ones. 

Yet workers are in need of thanks—now more than ever. Recent research shows that 83% of organizations suffer from a recognition deficiency, with 87% of recognition programs based on tenure, not performance.1

In light of this year’s hardships, there are significant benefits to expressing gratitude, and behavioral science may inform how we do so in this uniquely challenging time. 

The history of gratitude 

Robert Emmons, a psychology professor at UC Davis and an expert on gratitude, states that it is the “affirmation of goodness. We affirm that there are good things in the world, gifts and benefits we’ve received.”2

Gratitude has deep historical roots. The word “thank” derives from the Latin word tongēre, meaning “I will remember what you have done for me.”3

The thank-you note goes back to Ancient Egypt and Rome. In Egypt, a common practice was composing letters of thanks to the deceased for what they had done during their lives. The Vindolanda tablets, a collection of handwritten documents from Roman Britain, show letters of thanks from soldiers hoping to get promotions.4

The benefits of gratitude

It’s no accident that humans have been expressing gratitude for such a long time: psychological research over decades has turned up many positive benefits of being grateful. Those who express gratitude are more satisfied both personally and at work.5

While there are benefits of gratitude for personal well-being, research also demonstrates benefits that expressing thanks has for others. In a job interview setting, researchers found that people perceived interviewees to be more friendly and informal when they expressed more gratitude.6

Similarly, leaders who express gratitude are seen to be less selfish, and their followers had higher commitment to their organization outside of their required tasks.7 Another study found leaders to be perceived as more benevolent and to have more integrity when they expressed thankfulness.8 Among teams, teammates that expressed gratitude were more sensitive to each other and more receptive to negative feedback.9,10

Gratitude and greatness

Expressions of gratitude don’t just improve team dynamics; they can also yield better results. In one study, researchers tested the effects of gratitude on team performance. Forty-three NICU teams, each consisting of two nurses and two physicians, participated in an acute care simulation. Afterwards, the groups received an expression of gratitude from mothers, other physicians, or both.

The researchers found that the mothers’ gratitude most positively improved team performance, and was linked to increased information sharing between team members.11 These results show that those who receive service (e.g. patients, customers) can help organizational teams work better together simply through showing gratitude.

Seeing is believing

Even just witnessing an act of gratitude can boost positive behaviors. In one study, researchers asked participants to read a movie review draft and underline eye-catching paragraphs. Before doing so, however, they were given an example review that had supposedly been marked up by a previous participant. In the example review, in addition to underlining passages, the past participant had also corrected typos. For some participants, these corrections were accompanied by a note of gratitude from the reviewer, thanking the participant for catching their mistakes.

The researchers found that those who had read the example with the thank-you note included were more likely to help correct errors, and more likely to say they wanted to become friends with the reviewer. Interestingly, these participants were also more interested in being friends with the person receiving the gratitude.12

Clearly, expressing gratitude has both personal and prosocial benefits. The benefits are so great that the Great Place to Work Institute, which identifies the best companies to work for and helps companies improve their corporate cultures, lists “showing appreciation and recognition” as one of their nine criteria for making it to the Fortune 100 “Best Places to Work” list.1

Why gratitude makes us feel good

A major reason why gratitude carries these benefits lies in the way we evolved as a species. The theory of reciprocal altruism, introduced by Robert Trivers in 1971, holds that gratitude helps us regulate our response to altruistic acts by others and motivates us to respond altruistically in turn.

Trivers suggests that gratitude, like many other biases and social norms, emerged through natural selection. In the ancient past, individuals had a much higher chance of survival if they could depend on others to help them. When we feel gratitude after others have helped us, it inspires us to return the favor, which in turn motivates them to lend us a hand again in the future. Many of our primate relatives are known to show reciprocal altruism as well: chimpanzees are more likely to help other chimps who have given them assistance in the past.13

The benefits of gratitude can even be seen at the neurochemical level. Neuroscience research suggests that expressing gratitude increases dopamine and serotonin, and activates the ventromedial prefrontal cortex. After a while, our brains start to crave the experience of giving and receiving thanks.1,14,15

Are there better ways to say “thanks”?

Research shows the positive effects of expressing gratitude on follower behavior, team cooperation, and personal well-being. But what is the best way to go about it? Science offers some interesting insights.

Thank individuals for their responsiveness

A recent study from the University of Toronto found that people responded more positively when they were thanked for their responsiveness (e.g., “I don’t know what I would do if I didn’t have you there today”) than for the costs they incurred (e.g., “Thank you for sacrificing time for me”). Research shows that acknowledging the cost puts individuals in “exchange mode” and emphasizes an impending reciprocation instead of making the person being thanked feel good.16,17

Don’t overthink it

Researchers from the University of Texas ran a study on writing thank-you notes, and found an imbalance between how writers believed their messages would be received and how they actually were received. Recipients found the letters both warmer and more articulate than the writers predicted.18

Allow others to express gratitude

Given the personal benefits of expressing gratitude, take time to let your employees do so themselves. This might entail giving employees paid time off to volunteer, or letting them have a say in corporate giving initiatives. One study by Benevity, a software company based in Calgary, Alberta, found a 57% drop in turnover among employees who both volunteered their time and donated money.19

Think outside the box in saying “thanks”

While typical corporate holiday events aren’t likely to happen this year, there are still ways to say thanks. For instance, Moneypenny, an answering service company based in the United Kingdom, is sending each of its employees a turkey (or vegetarian option) to celebrate the holidays this year.20

“Gratitude is literally one of the few things that can measurably change people’s lives.”

Robert Emmons21

While saying “thank you” might be an ingrained mannerism, doing so improves our well-being and our team’s performance at work. During a particularly challenging year, organizations may have to use unconventional methods this holiday season to ensure their employees feel recognized. No matter the method, the effort of recognition has scientifically proven benefits and is needed now more than ever.

The Behavioral Science Guide to Gift Giving

“The Gift of the Magi” is my all-time favorite short story. Written by O. Henry, it tells the story of a young lady, Della, and her husband Jim. Della wants to buy a good gift for her husband, but she is short of money. So she visits a hairdresser, who cuts off her long locks of beautiful hair and pays her $20 in return. She uses the money to buy an expensive gold chain for her husband’s favorite pocket watch.

When Jim comes home that evening, she gives him the chain and admits to selling her hair in order to be able to afford it. In return, Jim gifts her a set of ornamental combs for her once-long hair and admits to selling his pocket watch to get money for the combs. In other words, both of their gifts are of no use to the recipients—and yet, they don’t complain, because the incident demonstrates how much they love each other.

My other favorite thing to do is to create contemporary versions of classics. So, here’s “The Gift of the Magi: Reloaded.”

This is the story of young Della and her husband Jim. Della wants to buy a good gift for her husband. After a few hours of browsing for inspiration through Pinterest and Instagram and reading through listicles with titles such as “50 things to get for your boyfriend this holiday season,” she decides to give him the latest PS4 game, Call of Duty: Black Ops Cold War.

To be able to afford the overpriced game, she decides to get rid of the Fitbit she had been gifted the previous Christmas. Given its unused, brand-new status, she manages to sell it for a handsome amount on eBay and proceeds to buy the game. In the evening, when Jim returns home, she excitedly hands him a copy of the new Call of Duty. He informs her that he has upgraded his PS4 to the shiny, new PS5, making the game unusable. Then he proceeds to gift her a Fitbit Premium subscription, which he thought would go well with his gift from the previous year.

That Christmas, Jim plays his PS5 for 24 hours straight. Della researches online for ways to cancel Fitbit subscriptions.

What’s my point? Buying gifts is hard, and we need science to help us.

The science of gift giving

Gift giving is an important social custom that has many layers to it. It is a representative microcosm of many social constructs—identity, social norms, similarity, obligatory rituals, reciprocity, and so on. A gift giver has several objectives: to satisfy the recipient, to signal their own status, to represent the status of the relationship, and so on. In addition to balancing all this, add into the mix the paradox of choice—the feeling of paralysis that arises when we’re faced with too many options. Gift giving is an art, but it calls for science, too.

If you are struggling to cross those last few names off your Christmas list, might I complicate your life just a little bit with a few more factors you should consider? Behavioral science can offer us a few evidence-based tips on how to pick the right gift for somebody.

1. What does your gift say about you? And about the recipient? 

In his iconic paper “The Social Psychology of Gifting,” the psychologist Barry Schwartz explores the idea of the gift as an identity marker.1 It is a common thesis that gifts represent the identity of the giver: we give gifts that force others to form a certain image of us. A rich person might find joy in giving conspicuous gifts, while a book lover judges gift givers on the genre or quality of the book they choose to give.

The lesser-explored idea is that of the gift representing the identity of the gift receiver. Schwartz gives the example of parents imposing their vision on children with the selection of gifts such as a science kit or a Barbie doll. The gift then becomes a subtle way of telegraphing to the recipient the identity that others expect from them.

So, when giving someone a gift, make sure you’re not sending them a message they don’t want to hear: your present carries information about how you see the recipient, and how they should see you.

2. Did you tell yourself “I love this, so I am sure he will love it as well”?

A gift is often seen as a representation of the similarity between the giver and the recipient. An interesting paper by Elizabeth Dunn and team explores this theme further.2 In a series of experiments, participants were led to believe that either an acquaintance of the opposite sex or a romantic partner had given them either a desirable or an undesirable gift. After receiving the gift, participants rated how similar they thought they were to the person who gave it to them.

The results showed that after receiving an undesirable gift, men were likely to rate themselves as less similar to the gift-giver. Men even went on to report a negative outlook for the relationship because of this perceived dissimilarity. (Women’s ratings of similarity weren’t significantly affected by the gift they received.)

So, don’t risk buying a gift for someone because you like it, under the assumption that they’ll like it too. This is especially true if the person you are giving a gift to is male.

3. Did you ask the recipient what they wanted? 

As much as we love surprises, science suggests thinking twice before springing one on someone you love. Francesca Gino and Francis Flynn have studied gift registries to dig deeper into this dynamic.3 In a series of experiments, participants were required to choose from a preselected set of gifts. The paper concluded that gift recipients were more likely to appreciate a gift when it was something they had explicitly requested. Meanwhile, gift givers (falsely) assumed that an unsolicited gift would be considered more thoughtful by the recipient.

Planning a surprise? Think about letting the recipient in on the secret!

4. How much did you spend on your gift?

And finally, how much is too much—and how little is too little? Another paper by Flynn and colleagues found an interesting dichotomy between how recipients and givers viewed the cost of a gift.4 Gift givers expect a positive correlation between what they spend and how much the recipients will love the gift. And the gift recipients? They don’t care about the monetary value!

That might have just saved you a lot of money. Like they say, it’s the thought that counts!

Finding the perfect gift

Ok, so clearly, Della and Jim need a better framework to choose gifts for each other. After having gone through this one time too many, I have decided to use a consultant’s approach to gift buying. For those who are still crossing things off their lists, here’s an easy primer to gift giving:


Category 1: The Gamble Gift

These are gifts that the recipient has dreamt of for some time. They have made lists of features, watched YouTube videos comparing their options, and mooned over unboxing videos. They know exactly what they want—but then you, after hearing them talk about the item in question for so long, decide to buy one for them.

Now, caveat emptor: this could work out really well, but it could also backfire badly. How does that happen?

  1. Well, it’s perfect when the recipient gets exactly what they wanted. The best way to make this happen is to keep your ears open for all kinds of hints.
  2. But it’s not so perfect when the gift giver ends up choosing a less preferred brand. This can breed resentment, because now the recipient feels they can’t justify spending their own money on the one they actually wanted.

Category 2: The Grocery List Gifts

You would not believe how often this happens, sometimes even unknowingly. Getting gifts that people consider “have-to-buy” items is the worst. The only way to avoid this trap is to peek into other people’s shopping carts before paying. If it’s lying in a family shopping cart at a grocery store, don’t gift it.

Category 3: The Recycling Gifts

When you gift something that the recipient does not want and would never pay for, assume you will get the gift returned to you (or regifted to someone else) in a few months. There are no verbal cues for this; just the silent reprisal of a gift that never deserved to be a gift. This can be the outcome of assuming that the recipient will like your gift as much as you do—or of just not putting any thought into its selection.

Category 4: The Perfect Gifts

The trick is to find things that people want to own but would feel guilty buying for themselves. When they get it as a gift, it’s perfect: they got it without spending money on it. The list is narrow but wide-ranging: Amazon’s Alexa. A nice passport cover. Concert tickets. One of those beautiful notebooks. Fun beer glasses. Quirky coffee mugs.

You know the pattern now. Think of all the things around you that you got as gifts but didn’t throw away. That’s it. That’s the perfect gift. They lurk around forever. You cannot get rid of them, because you wanted them—you just couldn’t justify buying them for yourself.

So there you go. Behavioral science to the rescue once again.

By the way, in case you want to thank me, I have been ogling projectors for a while on the internet. Just saying, I could see myself begrudgingly accepting this perfect gift.

Heuristics May Bias Judicial Decision-Making

“Are judges highly skilled mechanics who make rational and logical decisions? Or are they intuitive hunch-makers who feel their way to decisions that they later justify with deliberation?”[1]

Chief Justice John Roberts used a now-famous umpire analogy during his confirmation hearing in September 2005 to describe the role of a judge, or justice, as he saw it: “Judges are like umpires. Umpires don’t make the rules; they apply them.”[2] This view of a judge’s role, which has been heavily debated,[3] paints a picture of the judge as a neutral and impartial arbiter. It brings to mind the image of a judge who decides disputes entirely rationally and consistently, based only on a case’s objective merits.[4]

However, a growing body of research suggests that this picture of a judge—as an impartial and wholly rational arbiter—is not quite correct, or at minimum, incomplete.[5] Rather, empirical evidence suggests that judges are in fact “constrained by the boundaries of human cognition.”[6]

For instance, Peer and Gamliel reviewed several studies whose findings suggest that “irrelevant factors that should not affect judgment might cause systemic and predictable biases in judges’ decision-making processes in a way that could be explained using known cognitive heuristics and biases.”[7] Rachlinski and Wistrich, the latter a former federal judge himself, conducted research involving one thousand judges and found that judicial decisions run the risk of being less consistent than one might think, due in part to a “susceptibility to framing … caused by uncritical reliance on heuristic processing, which can be exacerbated by time pressure.”[8]

The extent to which judges rely on intuition, and are thus susceptible to heuristic-based persuasion, can be predicted fairly well by cognitive reflection tests (CRTs).[9] The purpose of the CRT is to “test[] a respondent’s ability to suppress intuition in favor of deliberation in a setting where intuition is misleading.”[10] Essentially, a CRT contains questions that are simple but require some deliberate processing; a reliance on mental shortcuts and intuition will typically lead to a wrong—but a predictably wrong—answer.[11]

For example: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?[12] Most people immediately answer that the ball costs 10 cents.[13] Intuitively this question seems obvious, but the point is that here, intuition leads to the wrong answer. If the ball costs 10 cents, the bat must cost $1.10, leading to a total of $1.20. Instead, getting to the correct answer (the ball costs 5 cents) requires suppressing the initial intuitive response. A typical CRT has three of these sorts of questions.[14]
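For readers who want the arithmetic spelled out, the deliberate solution is a one-line piece of algebra. Writing b for the ball’s price in dollars (so the bat costs b + 1.00):

```latex
\begin{aligned}
b + (b + 1.00) &= 1.10 \\
2b &= 0.10 \\
b &= 0.05
\end{aligned}
```

The ball costs 5 cents and the bat $1.05, which sum to $1.10 and differ by exactly $1.00, as the problem requires.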

Studies on judges showed that “judges performed about as well as the most educated adults on the CRT”[15]—not terribly well, answering on average just over 1 out of 3 questions correctly.[16]

A low CRT score suggests that judges, at least when making decisions in a general setting, often make intuitive decisions when they should be reflective.

Of course, a judge may make decisions differently in court than when making ordinary, non-legal decisions. And judges can be forgiven for being bad at math.[17]

But the CRT is remarkably robust, and it actually predicts fairly well the extent to which those who score well (engineers) or poorly (judges) use deliberate or intuitive reasoning in other contexts.[18] Recent studies show that judges are inclined to rely on intuitive judgments and have only a mild disposition toward overriding them.


[1] Guthrie et al., Judicial Intuition, at 2.

[2] 9/12/05: John Roberts’ Baseball Analogy, ABC News (Sep. 12, 2005), https://abcnews.go.com/Archives/video/sept-12-2005-john-roberts-baseball-analogy-10628259.

[3] Compare Getting Beyond Balls and Strikes, NY Times (Oct. 23, 2018), https://www.nytimes.com/2018/10/23/opinion/getting-beyond-balls-and-strikes.html (arguing against the view that judges are umpires) and Kagan: ‘Umpire’ metaphor suggests judges are robots, Politico (June 30, 2010, 9:50 AM), https://www.politico.com/blogs/politico-now/2010/06/kagan-umpire-metaphor-suggests-judges-are-robots-027873 (arguing against the view that judges are umpires) with Is a Judge a Player or an Umpire?, Heritage Foundation (Aug. 9, 2018), https://www.heritage.org/courts/commentary/judge-player-or-umpire (arguing in favor of the view that judges are umpires).

[4] See Peer & Gamliel, Heuristics and Biases in Judicial Decisions, at 114.

[5] See generally Peer & Gamliel; Rachlinski & Wistrich, Judging the Judiciary.

[6] Peer & Gamliel, Heuristics and Biases in Judicial Decisions, at 114.

[7] Id.

[8] Jeffrey J. Rachlinski & Andrew J. Wistrich, Gains, Losses, and Judges: Framing and the Judiciary, 94 Notre Dame L. Rev. 521, 573 (2018).

[9] Guthrie et al., The Hidden Judiciary, at 1498.

[10] Id.

[11] See id. at 1497.

[12] Id. at 13.

[13] Id.

[14] Id.

[15] Guthrie et al., The Hidden Judiciary, at 1498.

[16] Id.

[17] Rachlinski & Wistrich, Judging the Judiciary, at 13–14.

[18] Id.

TDL Brief: The Psychology Behind Charity

Benevolence is widespread. It can be easy to look at headlines and fixate on today’s hardships, but hopefully we can ground ourselves in knowing that the world is full of people who dedicate their lives to caring for others, who volunteer and donate with little in return, who make conscious efforts to help others even if in small ways.

The concept of charity as we know it originally developed in connection with religious institutions and notions of moral sanctity, but it is now embedded in society at large. Charitable organizations make up a large portion of the non-profit sector. Our tax code allows tax deductions for donating. Through digital and mass media culture, our exposure to organizations in need of resources has skyrocketed. The ease with which we can donate, be it through a GoFundMe or a quick Venmo in response to an Instagram call to action, ingrains charity into our daily routines. It can be overwhelming navigating how much to donate, what resources to give, and who to give them to. Yet this complex negotiation is a testament to the expansive nature of what it means to care for others as humans.

Let’s go back to basics and take a deeper look into what motivates us to be charitable, and how an understanding of our cognitive processes can push us towards charitable behaviors.

1. Helping others has been an evolutionary strength

By: Scientific American, Why We Help (November 2012)

When we learn about how we evolved as humans, and how species evolve in general, we typically hear the phrase “survival of the fittest”. Our evolution is framed as “dog-eat-dog” with much of the onus of survival on the individual and their strengths. But then what accounts for characteristics like altruism and humanitarianism? Why do we share with and care for people we might not even know, in addition to those around us? 

There are a few different hypotheses for how these tendencies came to be. One posited evolutionary mechanism of cooperation is reciprocity. Most of us are familiar with the saying “you scratch my back, I’ll scratch yours”: giving to others in the hope that the favor will be returned when we are in need. Reciprocal generosity is seen in other species, like vampire bats, and demonstrates how sharing can create long-run individual survival benefits. Another hypothesis involves kin selection, whereby individuals who care for their closely related kin potentially decrease their own fitness while increasing the chances of reproduction and survival for others in their gene pool. In doing so, they make it more likely that genetic material promoting cooperative behaviors is passed on.

Additionally, when groups of family and neighbors help each other, it can create growing spatial clusters of cooperators that out-compete non-cooperative individuals. Laboratory studies on yeast cultures demonstrate this mechanism, but it is easy to see the benefits of community support in our everyday lives. Evolutionary perspectives grant insight into why we are the way we are, and why helping others can play such a large part in human life.

2. Giving makes us happier and healthier

By: Greater Good Magazine, 5 Ways Giving Is Good for You (December 2010)

There is something uniquely rewarding about giving to others, even though we expend our own resources to do so. One study gave participants a sum of money and evaluated their responses to A) spending the money on themselves, and B) giving the money to another person. The researchers found that even though the participants predicted they would get more joy out of spending it on themselves, they actually had a stronger positive reaction when giving the money to someone else. 

Our neurochemistry supports this claim, as giving activates brain areas associated with pleasure and social connection. Giving also elicits a wave of endorphins similar to a “runner’s high,” in this case referred to as a “helper’s high.” These mental health benefits reverberate into our physical well-being, lowering stress levels and providing significant health benefits to the elderly and those with chronic illnesses.

3. We give more when we emotionally connect

By: Chicago Booth Review, How charities can get an edge (March 2020)

So, we’ve established some factors in why we give and the benefits of giving, both in an evolutionary sense and in a more immediate, individual sense. But at a time when the opportunities to donate seem endless, what draws us towards certain organizations rather than others? Experimental data shows that we are much more likely to respond to donation pleas that introduce a single person who would benefit from our help than to quantitative information on the charity’s impact. This phenomenon is known as the identifiable victim effect. Giving is often driven by our emotional responses and personal connections, rather than a rational deliberation of overall impact.

Since our helping behaviors started at a more close-knit level (i.e., between family, friends, and neighbors), it makes sense that personal connection would trigger altruism. Amidst the mass media landscape, we take in more information than ever and are exposed to hardships across the globe. We respond with empathy to most news of devastation, but it can be hard to truly grasp the gravity of a situation that is foreign to us. For charity marketing to trigger an action-inducing level of empathy, it must often create a bridge between the audience and the cause itself.

4. When we give, it inspires others to give

By: The Guardian, The science behind why people give money to charity (May 2015)

Characteristics like kindness and generosity are driven by our deeply social tendencies. However, our charitable behaviors are highly influenced by external social factors as well. The altruism of the people around us can push us towards giving, and alter the amount of time or resources we are willing to give.

Studies show that we are more likely to donate to a cause if someone we know is asking for the donation; this includes family and friends, as well as prestigious names like the Bill and Melinda Gates Foundation. Charities can use these social influences to amass more funding. For example, if we use an online donation platform, we might be prompted to enter the email addresses of people we know in order to reach more potential donors. When someone receives an email saying “Your friend _____ is asking for your help…”, they are more likely to donate. Additionally, if we are informed that the previous donor gave a large sum, we are more likely to donate a higher amount. So when you give, it’s not just your tangible contribution that has impact. It is also the precedent of generosity you set for those around you.

5. Giving fights powerlessness

By: The Decision Lab, Smart Giving for a Cognitively Saturated World: Nick Fitz and Ari Kagan

In today’s world, we are forced to navigate an increasing push towards individualism and a rapidly growing sense of global interconnectedness, an often disorienting paradox. Amidst the pandemic, it can be especially difficult to process the loneliness of physical isolation paired with unending digital communication and information sharing. Further, it’s easy to become nihilistic amidst the 24/7 news cycle, which inundates us with media on global hardships. When there are so many problems to tackle, and the problems themselves seem insurmountable, we can default to doing nothing.

However, finding meaningful ways to give can empower us against helplessness. When we are actively involved, rather than passively observing, it fosters a sense of connectedness and reminds us of our ability to contribute to change. One way to integrate giving into our lives is through Nick Fitz and Ari Kagan’s new app, Momentum. The app helps combat choice overload by hand-selecting vetted charities personalized to our priorities. Momentum then allows us to pair donations with our everyday actions and major events. This way, every time we take an Uber we could automatically donate a small amount to climate action groups, or when we march for Black Lives Matter we could donate to racial justice organizations. There are many ways to contribute to change, but if you are one of many struggling to figure out how, Momentum might be able to lend some guidance.

Spinning a Web: Trust and Autonomy on Social Media

The classic character Svengali, from the novel Trilby by George du Maurier, is often referenced as the quintessential manipulative figure. The name has entered the cultural vocabulary to describe a devious, dishonest person capable of extraordinary manipulation: a Svengali is crafty, clever, and fundamentally untrustworthy.

In the novel, Svengali is a hypnotist who seduces and exploits the helpless, innocent title character, Trilby.1 Preying on her vulnerability, Svengali shows no compunction in taking advantage of her and turning her into a great, but entirely dependent, singer. An illustration by du Maurier, released shortly after the novel’s publication, portrays Svengali as a spider spinning an intricate web, a metaphor for his cunning guile and talent for entrapment.2

The relationship between the fictional characters continues to be compelling as a story of exploitation. Trilby is a tragic figure not because of her difficulties, but because she is ultimately not the author of her own story. Svengali’s intervention in her life strips her of her agency, making her prey to his wishes and desires.

While du Maurier uses a literal web to depict this manipulation, various experiences on the virtual web are now raising similar concerns. The recent documentary The Social Dilemma portrays a community of Svengali-like Silicon Valley software engineers and social media platform designers.3 According to the documentary, influential strategists from companies like Facebook and Instagram have carefully manipulated their users’ experiences to elicit chemical reactions in the brain that foster dependent behavior.

This rapidly emerging dynamic coincides with a growing societal difficulty: while social media usage continues to rise globally,4 so do rates of anxiety, frequently associated with these platforms. This tension, where repeat behavior persists in the face of mounting consequences,5 raises questions about the role of addiction in social media usage.

This article intends to explore that dynamic further. Beginning by outlining how social media platforms are designed to elicit dependent behavior, it will then examine how dependency affects personal well-being, before concluding with a suggestion of how social media designers can potentially restore a sense of autonomy and satisfaction in their users. 

Behind the curtain: Manipulation on social media

Our neural circuitry largely responds to rewards and punishments. Put simply, we are motivated to seek out particular experiences or stimuli in our environments that we know can offer us rewards. As we come to learn that a particular stimulus has this property, and this association is reinforced through repeated experience, we become increasingly motivated to seek out that stimulus.6

To provide a simplified understanding of how this process works, imagine eating your favorite food. As you take a bite, deep within your brain, a chemical called dopamine is released, driving increased cravings and sensations of wanting. Dopamine marks the experience of eating this food as pleasurable, and links it with your surrounding environmental conditions.

As the driver of pleasure, dopamine serves as the chemical incentive to repeat an experience that brought us pleasure initially. That means, if dopamine release is triggered while eating a donut with your morning coffee, the following day, when you sit down to drink coffee again, you will likely crave a donut and feel the urge to replicate that experience.

While that offers a very simplified view of how the brain responds to incentives, the same broad effect can happen with virtual experiences as well. Research suggests that, in extreme cases, adolescents who use social media may experience the six core components of addiction (salience, conflict, mood modification, tolerance, withdrawal, and relapse).7 The platforms’ addictive qualities are largely founded on their unpredictable reward schedules, in which potentially infrequent rewards, largely based on social validation, are doled out via their core functions. 

It’s long been understood in psychology that rewards are much more powerful when they are given intermittently, rather than consistently. For example, when training a dog to sit on command, giving the animal a treat after every round of sitting is much less effective than giving it a treat at random, unpredictable intervals. Similarly, when a user posts a photo on a social media platform, they cannot be assured of when other users will engage with their photo and offer the social validation that causes a surge of pleasure. Critically, this all operates beyond the user’s control, as the surge of dopamine is tied to the social behaviors of others. As a result, users develop addictive tendencies, compensating for their lack of control with higher rates of engagement that facilitate greater opportunities for social validation.

Facebook’s “like” button serves as a prime example of a validation function on a social media platform. With this quick access to external validation, teenagers, a highly socially sensitive cohort, receive small jolts of dopamine as they rack up likes, shares, or any other signal of social relevance and acceptance. Interestingly, studies have shown that the anticipation of this dopamine hit, driven by social validation, is potent enough to nearly match the reward itself in terms of satisfaction.

Detractors and critics, like the expert commentators in The Social Dilemma, have taken issue with this deliberate design approach. They argue that social media platforms are being designed to exploit users’ desire for social approval and acceptance, particularly amongst highly vulnerable adolescents. From this position of vulnerability, anticipation, and dependence, anxiety in adolescents continues to rise at a meteoric rate.9 

This trend towards increased dependence on social media platforms presents an interesting question: is it possible to truly take satisfaction from the social media experience without being in control? And how can we reconcile this lack of control with the amount of information users are asked to absorb and assess on social media platforms?

Autonomy: The missing ingredient

Imagine riding a bike and struggling to feel in control of your direction. Every time you turn the handlebars, the bike resists, refusing to cooperate. Despite having your sights set on a particular destination, the bike seemingly has a mind of its own, leading you someplace else entirely. That situation alone is unnerving; it places you at the mercy of the machine. Should the bike spontaneously swerve off the road or plunge headfirst into oncoming traffic, you would be exposed to the consequent harm.

A collision isn’t guaranteed, but neither is your ability to prevent one. While that sense of helplessness is enough to provoke anxiety in most of us, imagine the bike also has a means of communicating with you, implicitly. While you initially resist and battle with it for control, it slowly begins to offer you feedback that its course is appropriate. Your instincts begin to waver; your wants and desires are thrown into flux. These messages from the bike are repeated time and again, sending you spiralling into apprehension, uncertainty, and confusion, until the bike has convinced you that its way is best.

This scenario isn’t meant to stoke fear, but rather to heighten awareness and empathic understanding of the vulnerability, anxiety, and dissatisfaction that come with a lack of control. As the previous sections outlined, dependency and addictive tendencies resulting from social media use are on the rise. Research has shown that dependency, on a wide range of substances and behaviors, can disrupt core cognitive functions. Attentional bias, for example, suggests that individuals exhibiting dependent tendencies pay disproportionate attention to the stimulus triggering the behavioral dependence.10 This has implications for their preference generation, their decision-making, and their subsequent behaviors and actions.

All of this is sure to have a detrimental effect on people’s overall well-being. Self-determination theory, developed by the psychologists Richard Ryan and Edward Deci, points to three complementary needs that ultimately underlie an individual’s well-being: competence, relatedness, and autonomy.11 In examining the seemingly bleak picture painted above, it is immediately apparent that autonomy—the perception of self-directed freedom, or control over one’s own actions—is grossly lacking. Actions on social media are overtly primed, sequences of movement are scripted, preferences are predetermined. As users, the feeling that we are not acting of our own volition is unsettling, and it contributes to the stress and anxiety many of us now feel with regard to social media.

Who’s in charge?

Recent developments in consent procedures offer a promising path toward improving user engagement with social media platforms. Dynamic consent, a procedure designed to give participants greater involvement in and control over their engagement in scientific experiments,12 offers an alternative to manipulation and vulnerability. By granting participants greater decision-making authority over how their personal data is used and distributed, dynamic consent attempts to re-instill a sense of awareness, control, and autonomy in participants’ experiences.

Conventional consent models have operated either on a single, blanket consent format or according to an opt-out model.13 While these models technically involve the user in agreeing to the procedure, they often lack the nuance and flexibility that would let the user feel a sense of control over exactly how their information is used. Participants are not given precise authority over where their data is distributed once it has been collected; rather, they offer overarching consent that ultimately grants the experimenter control over how the data is disseminated.

Dynamic consent rectifies some of these decision-making imbalances by involving participants in a continuous consent procedure, in which their consent is requested at several stages throughout the experimental process. Participants are thus given more control over their data and generally experience greater autonomy in their engagement with the experiment.

Introducing a form of dynamic consent to the user experience on social media platforms may provide a solution to this problem. Under a dynamic consent procedure, users would continuously select how they want to engage with a platform’s functionality, data collection practices, and advertising exposure, addressing the lingering lack of trust and autonomy in the user experience. Choosing to disable the like button, for example, would allow users who feel uncomfortable with its effects to exercise the autonomy that ultimately underlies satisfaction. Users could once again take control of their experience and determine precisely how they want to engage with the platform, including opting out of the features that potentially facilitate addictive tendencies.

Broadly speaking, this approach offers an alternative vision of the user experience. If the traditional social media experience is defined by user vulnerability, the hope is that introducing dynamic consent will prompt feelings of balance, control, trust, and satisfaction in users.

How Social Norms Complicate Behavioral Research

Do I contradict myself?

Very well then I contradict myself,

(I am large, I contain multitudes.)

Walt Whitman (1819–1892)

Walt Whitman was not alone in contradicting himself. We all contain multitudes: different rules of behavior that depend on the situation or context we find ourselves in. Our minds are adept at glossing over these contradictions; there is comfort in seeing ourselves as consistent, steadfast individuals. But we are heavily influenced by the numerous social groups we belong to and the social norms associated with them. These dynamics are challenging to map and predict, which can leave practitioners facing unpredictable results. Behavioral scientists who hope to effect change by altering social norms must do so with reservations and several strategies in mind.

Social norms, social chameleons

The human prefrontal cortex bestows on us the ability to respond and adapt to complex social networks unlike any other animal on earth. We are social chameleons, shrewdly altering our behavior to the norms of the group with which we currently identify.1 I, for one, have discovered that my wife’s work personality is remarkably different from the one I was previously accustomed to, now that we are both working from home. The influence of social norms can be subtle and mutable, and it is striking to observe someone’s behavior when you are unfamiliar with the social milieu they are interacting with.

Behavioral economics, which often utilizes low-effort “nudges” to influence behavior, can easily be knocked off course by anomalies in actual versus predicted social behavior. A recent study found that the type of behavioral intervention that failed most often was the kind involving social norming or social comparisons (40% of 65 cases).2 A principal explanation for this failure was the disparate responses of subgroups of the target population to a social norming message. By not considering the context and framing of different scenarios, we can fail to forecast the divergent responses of various individuals to the same basic situation.

The intricate nature of social norms

A major issue with influencing behavior via social norms is that the group or social identity with which a person identifies is variable and context-dependent.3 We exhibit different behavior depending on which group we currently identify with, and we rapidly switch perspectives depending on the context of the situation we find ourselves in.

The D.A.R.E. (Drug Abuse Resistance Education) program, a U.S. campaign to reduce drug use among teens that was most active in the late 1980s and 1990s, serves as an example of a behavioral intervention gone awry due to social norms. Research on the program has shown not only that it was an ineffective use of hundreds of millions of dollars, but that in some cases it actually increased drug use amongst teens.4

One of the shortcomings of the program was that, by imploring teens to ignore peer pressure to use drugs, it made drug use seem more pervasive than it actually was, thereby positioning it as normative behavior.5 Meanwhile, programs that position drug users as independent and autonomous can ironically appeal to adolescents seeking to identify with such traits.6 This type of Catch-22 situation can make it extremely difficult to find the right social norms to engage in an effective behavioral science intervention.

Additionally, social norms tend to operate in an intangible and implicit manner7—they are difficult to consciously identify and we tend to underestimate their impact in our lives. Have you ever met someone who openly admitted that they purchased a BMW or Mercedes-Benz purely for the status that it afforded them?

Dealing with behavioral volatility

It’s rare that a behavioral intervention forecasts the influence of social norms with complete accuracy. But there are a couple of things that can make this sort of intervention more precise.

Firstly, the effectiveness of an intervention campaign utilizing social norms can be improved by considering when and where the subject will be exposed to the intervention, and whether the relevant targeted identity is salient at the time.5  

Identifying all the groups your target will identify with, and the strength of those associations, is valuable. Likewise, understanding why, how, and when they identify with these groups is important. By segmenting and mapping out potential in- and out-groups and their influences on behavior, you have a better chance of identifying which factors are important to success and what you need to do to improve the effectiveness of your actions.

Another approach is to identify what works for whom and the contexts and scenarios that enable this. Then you could target a particular sub-group or utilize different actions for the various sub-groups. For instance, in the case of the D.A.R.E. program, having two different anti-drug programs—one for schools where drug usage is above the national average and one for schools where it is already low—could possibly help alleviate some of the problems found in the original program. By framing the communication according to the type of group identified, programs can be customized according to the needs, motivations, and norms inherent to a particular profile. 

For example, having the police present the D.A.R.E. program’s message could be problematic in a country where overall confidence in the police falls below 50% for certain segments of the population.9 If the program is defining drug usage as an out-group behavior to a group of teens who are seeking autonomy and are distrustful of authority, a police officer is probably not the best spokesperson for the job. Utilizing different spokespeople to communicate with different groups would likely be a better option.

When it comes to research and testing, what someone in a room filled with other people tells you about their behavior should not usually be taken at face value. Using implicit research methods such as Implicit Association Tests (IATs) and priming experiments, which detect the strength of a person’s subconscious association between mental representations of concepts, can help identify factors that are hidden or hard to articulate.

Lastly, observing actual behavior is key. Run small-scale experiments if possible. Try A/B testing, or rolling out separate interventions to different groups of participants and comparing the results (with proper controls and sampling, of course). Don’t go all-in until you have substantial evidence that your theories are validated.
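As a sketch of what "comparing the results" of an A/B test can look like in practice, here is a minimal two-proportion z-test on hypothetical numbers (the group sizes and response counts are invented for illustration; real studies should also check assumptions and correct for multiple comparisons):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test: did interventions A and B produce
    different response rates?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: message A (120/500 responded) vs. message B (90/500)
z, p = two_proportion_z(success_a=120, n_a=500, success_b=90, n_b=500)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 2.33, p ≈ 0.02: unlikely to be chance alone
```

A result like this would justify a wider rollout of message A; a p-value above your threshold would mean collecting more evidence before going all-in, exactly as the paragraph above advises.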

The multitudes we all contain make our actions challenging to predict. If we contradict ourselves, it’s often due to the turbulent impact of social norms on our behavior.

TDL Perspective: The Future of Preferences

“I think the biggest questions in neuroscience that haven’t been answered yet are going to be huge drivers of what we decide to do as a species with technology.”

Foreword

The TDL Perspectives project is an ongoing series of interviews with thought leaders who are involved in our mission of democratizing behavioral science. We pick out specific insights that are at the frontier of current events in behavioral science, whether that means applications that plug our insights into various industries or theoretical discussions about the contentious frontier of current research. If you have thoughts about these discussions, have expertise you’d like to share, or want to contribute in some way, feel free to reach out to Nathan at nathan@thedecisionlab.com.

Introduction

Today, Sekoul Krastev, a Managing Director at The Decision Lab, sits down with Nathan to discuss artificial intelligence and the future of human choices. We zoom in on the intersection between AI and behavioral science to understand how the decision-making landscape undergoes periodic transitions and what can be done to make the world a better place in this context. We deconstruct the various ways that people in relevant fields think about human and machine cognition. Then, we look to the future of technology to understand how these different understandings of decision-making inform potential solutions to current problems. 

Key take-aways

  • Even if a decision seems to happen in an instant, the underlying process is spread out in time. This is not always the case for machines, and it changes the way they are designed.
  • The biggest difference between people and AI may lie in how they are set up, rather than in their cognitive processes once outcomes are determined.
  • A finite amount of information is really important for a well-functioning AI system.
  • Value-based choice is still an open question, one currently beyond the reach of automation.
  • Technology makes individuals more influential, but that influence comes at a price.
  • Society makes choices faster than regulatory norms can keep up, so it often falls on people in tech companies to make significant decisions about how we go about our lives.
  • Behavioral science may be a key factor in changing the race between technological development and ethical frameworks.

Discussion

Nathan: I have Sekoul with me today and we’re going to talk about AI and behavioral science. Let’s jump right in. People often look at AI as an alternative to human decision-making. People propose that artificial intelligence can replace human decision-making in a number of contexts, especially when we recognize that our decision-making is flawed and that we’re making avoidable mistakes. Do you see artificial intelligence as an alternative to human decision-making?

Sekoul: I think in some contexts it can be. Artificial intelligence is a pretty broad term. It ranges all the way from fairly simple statistics to black box algorithms that solve complex problems. So depending on the decision you’re trying to automate, I think you have different types of success.

Sekoul: In a very simple scenario where you’re trying to determine, for example, if an image is of a cancerous or non-cancerous cell, that’s a decision that’s historically been made by professionals who are trained at that. And we know that AI is now better than humans at making that diagnosis.

Nathan: What do you think AI is doing there that we’re not? Is it a question of being able to input information better? Or just selecting the right sort of information? What do you think the difference is?

“It’s been shown in experiments that humans can [intuit] implicitly, but cannot actually describe many of those features explicitly. So the AI is able to do a lot of, what we call, intuition, which is essentially processing large amounts of data to come up with a very simple outcome.”

Sekoul: I think it’s able to pick up on information in a more perfect way. I think over the course of a career, a professional might learn to intuitively understand features of the image that would predict one outcome or the other. And I think the AI can do the same thing much more quickly. The reason for that is that you have a very clear outcome. And so you’re able to give feedback to the AI, and tell it when it is correct, when it is incorrect. When you do that, it learns what features are predictive of an outcome and what features aren’t.

Sekoul: It’s been shown in experiments that humans can do that implicitly, but cannot actually describe many of those features explicitly. So the AI is able to do a lot of, what we call, intuition, which is essentially processing large amounts of data to come up with a very simple outcome.

Nathan: Let’s talk about that a bit more. What do you think intuition is made of? Because I think that’s one thing that’s sometimes very kind of misunderstood in behavioral science, is the idea of our processing power, that we aren’t necessarily aware of.

Nathan: Daniel Kahneman, in his pretty famous book, Thinking, Fast and Slow, talks about how expert judgements are made in the blink of an eye, in a way that we can’t really recognize as a thorough, deliberative, precise choice. It’s one that’s achieved without any conscious processing. So do you think there are unconscious systems at work there that are similar to computational machine learning systems? Or do we have certain ways of processing information that our current AI systems haven’t caught up with?

Sekoul: So we don’t know enough about how the brain processes information to really say. It’s very likely that it does so in a way that we haven’t been able to replicate with AI yet. There’s certainly, I mean, if you think about the philosophy of neuroscience or cognitive science, there’s certainly an experience of making a decision that AI probably isn’t creating, whether that’s the qualia or the experience of making the choice.

Sekoul: In terms of just purely information processing, intuition is something that basically takes large amounts of data, and because our attention can not possibly attend to each piece of information, it sort of points the spotlight to the very small part of it or maybe even none of it. And you just have a feeling that something is correct or not.

Sekoul: So just because something doesn’t enter your conscious awareness, doesn’t necessarily mean that you’re using a different system to make that decision. I think that’s a misconception. And there’s actually a lot of research showing that, even for decisions that we consider conscious, a lot of the processing happens before we’re consciously aware of the decision. There’s even research from a couple of years ago, showing that when people are asked to reach for an item, the command to reach for the item actually comes before the conscious awareness that you made that decision. So, I mean, people use that to say that there’s no free will.

Sekoul: Interestingly, the reverse of that is that there is a free won’t, meaning that you can cancel the action as you’re reaching towards the item, up to the very, very last second of it. So you have conscious control over aborting the action. But in terms of choosing to do it, it seems like the conscious awareness is a little bit detached from the information processing. Which is to say that, for sure there are decisions we make more deliberately and less deliberately, but in both cases, we’re processing information using the same systems. And we’re essentially creating an outcome that’s based on techniques that are somewhat similar to what AI is doing.

Nathan: It’s funny when I think about, especially in an academic context, the process of decision-making or the experience of decision-making, as you were saying, I find that a lot of my preconceptions about how it works fall apart quite quickly. 

[read: Taking a hard look at democracy]

Nathan: One recent example is how a voter makes their voting choice and what point in time is that choice actually happens. In this example, there’s information processing going on for weeks before an election. People clearly collect information from friends, from advertisements, from political figures, watching speeches, watching debates, whatever. But there’s no clear point of decision. The experience of making decisions is actually quite distributed over quite a long amount of time.

Nathan: And I wonder, if our machines are given that same ability to process information over time, because usually we expect an output spat out right when we want it. And I guess that’s the same for humans too. Where we’re calling an output once you’re in the voting booth or once you’re at the doctor’s office or whatever it is.

Sekoul: I mean, that’s interesting because, again, your experience of your own opinion might need to crystallize at a different point. So if at any point you ask the person who they will vote for, it will crystallize in a particular way, depending on how they feel at that moment. And it’s the same thing with an AI: at any given point, it’s running averages for different outcomes. There are obviously different ways to get an outcome. You can have things that compete with each other to a finish line. You can have things that are going in different directions, get pulled up, and then start to go down. And then as soon as you reach a threshold on the upside or the downside, you get to a decision.

Sekoul: There’s different ways to do it. But ultimately we, at any given point, can rush that decision and have some sort of a system check. And that’s true for AI. That’s true for humans as well.

Can we manage uncertainty with cognitive shortcuts?

“Algorithms are just typically designed more deliberately. As opposed to, for example, a voter who may not have access to all the information that pertains to the decision that they’re trying to make. And that’s where I think algorithms are more powerful. It’s not so much in the execution, it’s more in the setup.”

Nathan: One idea that ties a few of these strands together is uncertainty management. These days everyone talks about how uncertain the times are, right? To me, this amounts to the feeling that we don’t have enough information to make decisions that we are faced with. If we think of a person as a system that’s constantly intaking information and is called to make decisions at somewhat unpredictable moments, there may be interventions that mediate the way we’re receiving information, right? 

Nathan: I wonder if you think AI can help with these sort of uncertainty management problems where policies can’t be fully constructed because we don’t have enough information to achieve what we decide are ideal outcomes. So, in a pandemic response, for example, you don’t exactly know how well people are going to cooperate with whatever policy you implement. And that introduces a forecasting problem.

Nathan: People talk about how we use certain heuristics. We use shortcuts and cheap ways of processing data in order to come to conclusions, even if we don’t have all the relevant information. Do you think that way of processing is something that machines can adopt? Or do you think there’s benefits in machines finding other ways of making those decisions without the shortcuts?

Sekoul: I think ultimately we take shortcuts for the same reasons that machines need to. There’s finite computational resources in the human brain, as there is in a computer. And in fact, if you think about a computer, the resources are even more finite, in other words, they have less processing power than the brain. So if anything, machines need to simplify the data and the decision even more. That said, the timeline that they deal with typically isn’t the same that we deal with. It’s pretty rare that you ask an algorithm to make a decision, a complex decision like the one you just described, extremely quickly. Whereas a human might be asked to have an opinion about something like that very, very quickly.

Sekoul: So I think algorithms are just designed more deliberately typically. As opposed to, for example, a voter who may not have access to all the information that pertains to the decision that they’re trying to make. And that’s where I think algorithms are more powerful. It’s not so much in the execution, it’s more in the setup.

Sekoul: Now, if you took a human being and you trained them to understand different topics and to understand the relationship between those topics and an outcome, et cetera, et cetera, et cetera. If you could somehow get over all of their past training and experience and convince them to look at the data dispassionately and purely think about, okay, this is the outcome, and these are the policies that are likely to lead to it with X percent likelihood. If you could do that, I think a human would be better than an AI at making decisions.

Is value-based choice a solely biological process?

Nathan: Well there’s a whole other question of value in those decisions. And assigning value to different outcomes. In a purely mechanistic sense, as long as your outputs are completely deliberate, like we were talking about before, assigning value is not actually that difficult. Because you can compare how close a certain step gets me to my final goal.

Nathan: But with political decision-making or moral decision-making, you have a problem of value being contested all of a sudden. So that probably poses quite a challenge for machines that are trying to make these sorts of decisions.

“Is it preferable to use a purely evidence-based way of making decisions? As individuals, sometimes maybe. As groups, probably not, because people have preferences. So ultimately, it’s very difficult to understand what a preference is composed of. I think people assume that preferences are composed of purely an outcome, which science is very good at predicting in some cases. But I think preferences are more complex than that.”

Sekoul: I think that’s where it gets a little bit messy. Value-based choice, it’s a relatively new field in neuroscience and psychology. And we don’t understand value-based choices that well. We know that a lot of it is driven by the prefrontal cortex. So it seems like we’re fairly deliberate about those kinds of choices. But we also know that, depending on the situation, there are different levels of effect of the emotional centers of the brain that can override that deliberate choice.

Sekoul: There’s a dynamic in how that decision is made in the brain. That makes it very difficult to understand to what extent the outcome is affected by different information. Especially when you think about the fact that a lot of the emotional response that we might see is driven by experience that has taken an entire lifetime to form. That’s the part I think that is really difficult to operationalize in an algorithm.

Sekoul: So you might operationalize the prefrontal cortex. You might say, I’m trying to get from point A to point B, and this policy will help me get there. And from a purely prefrontal cortex perspective, all you need to do is make a plan and draw the shortest route between the two points. And that’s your optimal solution. An algorithm can do that. Again, assuming you have finite information, and you give the same information to the person and the AI.

Sekoul: A purely prefrontal view of how value-based choice is made might be fairly similar between an algorithm and the brain. But as soon as you involve other brain centers, and of course, it’s not that simple, I’m kind of reducing it to that, but there’s definitely a mystery around how emotions and past experience, memories, et cetera, will drive that decision in different directions. And that’s something that the algorithm can’t simulate as easily. Just because we don’t understand exactly how that effect is created.

Nathan: Right. That makes a lot of sense. Are there ways that an algorithm could maybe take off some of the cognitive load of decision-making? Could we take the parts of our processing that we do understand and chop it up into parts that could be assisted through technology? Could we use AI to simplify our domain of choices that we have to make?

Sekoul: I wouldn’t say that it’s AI that we would use in that case. I mean, the answer is definitely yes. But to an extent, it’s science that does that. Science does that for us as a society. So we look at the best scientific consensus we can get on a topic. And we consider that to be a data point. But I don’t think anyone uses that alone as a driver to make decisions about anything in their life.

Sekoul: So is it preferable to use a purely evidence-based way of making decisions? As individuals, sometimes maybe. As groups, probably not to be honest, because people have preferences. So ultimately, it’s very difficult to understand what that preference is composed of. I think people assume that preferences are composed of purely an outcome, which science is very good at predicting in some cases. But I think preferences are more complex than that. How do you get to the outcome? And what’s the feeling you had while getting there? 

Nathan: It’s interesting that you mentioned science or technology as a way of facilitating decision-making. Because I think there’s a really complex relationship there between technology that hypothetically improves our lives, makes our choices simpler, and gets us to better outcomes quicker. But I think a lot of people also see science as something that’s complexifying the world. That gives us a whole bunch more options all of a sudden, and opens up new frontiers of decision-making. But also makes our environment a lot more stressful for the same cognitive apparatus to be processing.

How do humans handle advanced technology?

Nathan: Do you think there’s any value to that concern that advanced technologies that the user doesn’t fully understand challenge their ability to make their way through the world?

Sekoul: I think technology definitely opens more doors, and because those doors allow for more actions and decisions, it creates complexity, it creates cognitive load, it makes our lives more difficult. It also makes us more productive. I think the average person today is probably orders of magnitude more productive, just as an individual, and their effect on the world is more profound, compared to somebody a hundred or a thousand years ago. I think technology has that amplifying effect. And by virtue of amplifying our effect on the world, it necessarily brings in this increased complexity because we’re basically affecting reality in a more significant way.

Nathan: I think one interesting place to go from here is that we not only have more control over our world thanks to technology, but we also have control over that technology, especially people that are designing this. And I think there’s a key role here in terms of people that are designing the technology that facilitates our interaction with the world. Do you think that there’s certain ways of designing that technology that are beneficial? Perhaps like bringing in behavioral science to make that technology better? Do you think that’s a valuable use for behavioral science?

Sekoul: Yeah, definitely. There are different ways to do it. I mean, user experience design has been around for such a long time, creating interfaces that lend themselves better to people expressing their preferences and opinions and so on. I think that’s something that’s really powerful.

Sekoul: I think creating a shorter distance between humans and the control over the technology is really important. That’s, for example, what Elon Musk is doing with Neuralink, obviously a lot of criticism around that for various reasons. But ultimately the idea of bridging the gap between user and interface is a really powerful one. That’s, for sure, going to be a big topic over the next 30 years.

Sekoul: At the same time, I think understanding what people want when they’re using technology is really difficult. So as much as you can bridge a gap, increase engagement, increase the speed at which people engage with the technology, et cetera, actually understanding what a user really fundamentally wants out of that interaction is quite difficult.

Sekoul: The reason for that is that there’s the short-term wants and the long-term wants. And in the short-term you might think, okay, well, this user is driven to more interaction when I put bright colors and give them lots of likes and comments and whatever. That’s great, but that just creates an ecosystem of dopamine hedonism or whatever. It basically creates a hedonic treadmill that people will engage with and get addicted to.

Sekoul: But ultimately in the long-term, understanding what creates actual value, from a humanistic perspective, in people’s lives is something that user experience design is very unlikely to ever get to. So I think that’s where behavioral science can come in, understanding the long term perspective, asking ourselves more existential questions about what our relationship with technology should be.

Sekoul: The problem is, you can talk about that philosophically, but how to operationalize that, how do you operationalize something that we’ve spent thousands of years trying to understand, that’s really difficult. And I think that’s something that companies like Facebook and Apple and Google are struggling with more and more.

Nathan: Is delivering those long-term valuable outputs as opposed to preying on our tendencies towards certain kinds of salient products something you’ve seen in the field at all?

Sekoul: Yeah, I do think that they’ve shifted from delivering very short-term value to medium-term value. But I think the long-term value, at a personal and societal level, is just a huge challenge. How do you decide what long-term value for society looks like? 

Nathan: It is hard. And I think an extension of that is that big companies, people with a lot of influence over the environment in which we’re making our decisions, actually have influence over what that long-term value is. We know that our extended preferences about the world are variable and subject to certain influences. And especially when we have certain people at the helm of a place like Facebook, where people are engaging with it every day, spending multiple hours there, they probably have some control over what people’s preferences are.

Who oversees the ethics of rapid technological change?

Sekoul: I think it’s interesting that people have been talking more and more about how some of these social media companies might have malicious intent, and how they have a responsibility that they don’t fully realize.

Sekoul: I don’t know to what extent that’s true. What I do know is that technological advances come, paradigm changes happen, and as they do, there’s always a struggle to catch up. And the most recent one basically connected everyone in the world in the span of a decade or less. I don’t think any company or individual or group of people could have handled that in a good way. I don’t think it’s possible to do that slowly and deliberately, just because we don’t understand fundamentally what that means. We don’t understand how the brain treats that kind of environment. We’re basically built to interact with 50 people in our entire lifetime. So when you expose us to five hundred, five thousand, five million, that becomes really confusing. And nobody can really know what that will look like, especially because it’s not happening to one person, it’s happening to everyone at the same time. So it’s a crazy complex system.

Nathan: Yeah. There’s no control.

Sekoul: Rather than criticizing those companies, and of course they should be criticized for lots of things, I think from an existential perspective we, as a society, have to just think more about what value we want from those technologies. And it comes back to AI. I think understanding the problem we’re trying to solve is the most important part of all of this.

Sekoul: People use AI as if it’s a tool that can help us solve many problems, but they don’t emphasize understanding the problems enough. They’re thinking of AI as the solution, but it’s only a solution to problems that are extremely well-defined. And I think we have to start defining problems better.

Nathan: And whose job is it to define those problems properly? Is it whoever’s tasked with trying to make people’s lives better through this technology? Or is there an antecedent, a political question of who’s assigned to it? Or is it just whoever’s there in the moment? If you’re at the helm of a tech company as we explode into this digital era, all of a sudden it’s your problem just because you’re the one able to solve it.

Sekoul: I think people are literally in charge of solving those problems. There’s people whose job it is to solve the problem. And I don’t think they’re digging deep enough. If you’re designing a new interface for the iPhone, for example, it’s literally your job to think about that problem. But you’ve probably taken on a more short-term view. You’re thinking, how do I make this interaction faster? How do I make this more effective, efficient, pleasant to the user? How do I sell more phones, et cetera?

Sekoul: So ultimately the economic drivers will rush the decision, and I don’t think it’s those people’s fault. If you follow that logic, then I guess you could say that those economic drivers are driven by consumers and the policies around those things. So definitely there’s a place for policies to slow down those decisions and make them a little bit more deliberate. I think we don’t fully understand how technology, how AI, how those things will affect us on a societal level. And I think it’s okay to sometimes slow down, take your time, and understand things before you fully leap into them. I don’t think that’s going to happen, though. So it’s more of a hypothetical: it would be nice, but there are a lot of reasons it can’t happen just like that.

Nathan: Maybe we can end with a case study: the almost instant reaction to COVID-19, when we moved most of the world, most of our social interaction, online. There was no one point where we could stop and say, wait, let’s all have a big group discussion about how to do this properly. Whether we’re going to use Zoom, what the potential effects are of taking six hours of class a day online. Coming back to what we were saying at the beginning about the point of decision, there’s no one place where you can stop and say, hold on, this is exactly what needs to happen.

Nathan: And so when we think about technology, especially artificial intelligence, as something you can only apply when that decision is crystallized and when we know exactly what outputs we want, we get into a tricky situation. So what do you think behavioral science can do to improve that process? Whether it’s slowing it down or just working in the moment, as fast as we can, to redirect some of the flows, especially at the highest levels of design, governance, and business. What can behavioral science do in that moment?

Sekoul: COVID-19 is a very good case study for this, because there was a rush online, at least in the Western world. And I think you have to qualify that, because most of the world didn’t move to Zoom classes. Most of the world just kind of kept going the way they were going before, because they’re under the poverty line or close to it and had no choice. But for the part of the world that we’re in, a lot of the changes that we saw happened extremely quickly.

Sekoul: And I think to a large extent, a lot of what technology offers us in a situation like that is a tool. And how we choose to use it just reflects the kind of immediate problem we’re trying to solve. In this case, we couldn’t see each other physically, so we moved classes online. That’s great.

Sekoul: I think what behavioral science can do in that situation is not necessarily block that from happening. I don’t think that’s realistic. But I think just understanding the effects, trying to understand what types of questions should be asked as you’re doing that. Trying to understand what are the problems that are being created and how might this affect people. Basically experimenting around this shift. And going towards a direction where you can make those decisions more deliberate and make small adjustments around them.

Sekoul: So for example, let’s say you did this completely without caring about the psychology of people, you just move people online and you say, okay, kids, just spend six hours a day on Zoom, that’s it. That’s one scenario. And you might end up with a good situation or not.

Sekoul: But another scenario is one where you move everything online and you try different things. You have classrooms where, for example, there’s a lunch break and kids are allowed to hang out on Zoom, and other classrooms where you don’t do that. You have classrooms with one type of Zoom interaction, maybe little breakout groups where they interact in small teams, and another where it’s always a big group, et cetera.

Sekoul: And with time, I think you answer questions around what types of interactions are most positive. I think that’s the value that behavioral science will bring. Ultimately, it will just give you more data around what drives positive interactions and positive feelings.

Sekoul: Again though, I think the bigger questions are around what happens if you were to do this for decades, for a very long time. Hopefully that’s not the case here; I think we’re months away from it no longer being the case.

Sekoul: But for a lot of what technology is offering us, it is the case. We’re heading towards a world where we can’t live without it. And that’s where behavioral science needs to ask more fundamental questions. That’s where fundamental behavioral science research comes in. Not just research as part of a company, but rather the research that’s done at universities around questions like what is it to be human? And what ultimately fulfills us? How do we process information?

Sekoul: I think the biggest questions in neuroscience that haven’t been answered yet are going to be huge drivers of what we decide to do as a species with technology.

Nathan: All right. That’s an excellent place to end it. Thank you for joining me.