Nudging, Democratized: A Guide to Applying Behavioral Science

From New York Times bestselling titles to Nobel Prize-winning research, the past decade has been transformative for behavioral science. What was once a mostly academic movement on the fringes of economics and psychology is today a field that fuels government, industry, and philanthropic work at the highest levels. There’s a reason for this quick ascent: unlike many other academic disciplines, research into human behavior has yielded insights that are digestible, and often intuitive, to non-specialists — and can help us inform and improve decisions in both our personal and professional lives.

Spreading these insights to a broader audience has been at the core of our mission since founding The Decision Lab in 2013. It is our firm belief that knowledge should be an equitably distributed resource — and that we should all learn about the forces shaping our behavior. To realize this goal, we aim to publish authors who can use everyday experiences and language to translate complexity into common sense, and make it interesting, too. Each of our articles has a simple formula: take a lesson or two from peer-reviewed behavioral research and explain it in a way that an unfamiliar reader can find not just understandable, but useful. Over the years, we’ve taken to calling this effort democratization — or simply, spreading information beyond its traditional boundaries and audiences.

As we’ve grown, it is in this spirit we’ve begun providing consulting work for organizations in need of behavioral and experimental expertise. These projects are driven by the belief that all organizations can benefit from understanding the behavior of their constituents or clients, their leaders or laborers. At its core, behavioral science is just a way for organizations to think critically about how their actions (e.g., marketing strategy or store layout) affect behaviors they are interested in (e.g., purchases or program enrollments), and to test the efficacy of different approaches empirically. We know that small tweaks in our environments can have large, consequential effects on our actions, and as such no organization is too small to incorporate behavioral insights into its operations.

It is with these goals in mind that we are thrilled to announce the release of Nudging Democratized: A Guide to Applying Behavioral Science. The book, co-authored by the brilliant Steve Shu and TDL editor-in-chief Andrew Lewis, provides a practical blueprint for organizations to start using behavioral science at low cost and with relative ease. Aggregating lessons from his three-decade career as a management consultant, Shu provides a framework of application that will empower businesses of any size to benefit from the principles of behavioral science and human-centered design.

The book’s main contribution is in spelling out the process of Behavioral GRIT™, which stands for business functions related to Goals, Research, Innovation, and Testing. By setting these priorities ex ante, Shu marshals examples of how organizations have integrated behavioral science into their business practices, including the incubation of innovation centers and the development of behavioral science overlay capabilities. Throughout the book, case studies provide a view of how real organizations have incorporated behavioral insights to achieve a wide range of desired outcomes. For each case study, Shu highlights an element of the GRIT framework to give the reader hands-on examples of its implementation. For Goals, we see a large financial services firm implement an in-house nudge unit to create and encourage behaviorally-healthy retirement plans. For Research, Shu explains the impetus for and practice of incorporating behavioral and experimental research into business practices — a concept he concedes is likely somewhat foreign to the average manager. In explaining Innovation, Shu walks the reader through a private company’s experience of designing an app that leverages behavioral insights to encourage individuals to think holistically about life planning. In the section on Testing, we see the evaluation of a successful effort to reframe information and nudge individuals from different income groups to save money.

Taken together, Shu’s recollections of past work with businesses of diverse size and purpose offer a rich narrative on the flexibility and practicality of applied behavioral science. The purpose of the book — in line with our efforts toward democratization — is to lower the barriers to entry and provide managers with a toolkit for making use of behavioral insights in their organizations. For any reader uncertain of how they might translate broad behavioral concepts into actionable business strategies — or for any aspiring behavioral scientist seeking a glimpse into the types of problems and solutions they might expect to face in the field — this book is the place to begin. Pick up your copy here, and let us know what you think!

The Majority of Americans Are Checked Out At Work: How Behavioral Science Can Help Create Better Managers

There’s a crisis in the American workforce: 66% of workers are checked out at their jobs [4]. At best, these employees clock in and out, putting in time but zero passion. At worst, they resent feeling that their needs aren’t being met, and miss deadlines or work days altogether. The result? An estimated $480-$600 billion a year in lost productivity [4].

How can companies create a more engaged workforce? The answer goes beyond salary increases, job flexibility, and higher 401(k) matches (though, to be sure, those help). Research in organizational behavior suggests that having “transformational leaders” — or in other words, very effective managers — can induce employees to be more engaged, motivated, and productive. By cultivating effective leadership, companies become more successful. 

Although “transformational leadership” may sound like academic jargon, it encapsulates the qualities and behaviors of the ideal people manager. This type of leadership has four key components:

  • Instilling pride in employees, while gaining trust and respect and communicating the organization’s value and mission.
  • Helping employees set goals, communicating optimism about future goals, and providing meaning to everyday tasks.
  • Challenging ideas, taking risks, soliciting new ideas, and thinking outside the box.
  • Giving employees personal attention, empathy, and support, while celebrating each person’s contribution [2].

Unprepared Managers

Managers have a profound impact on the day-to-day engagement of their employees, accounting for 70% of the variance in engagement levels [1]. The more effective the manager, the more engaged employees will be. Unfortunately, companies frequently fail to hire quality managers — in fact, Gallup estimates this happens about 80% of the time — and many managers are ill-equipped to manage individual employees, let alone a team [1]. 

Why are so many managers ineffective? Part of the reason has to do with the manager selection process. When Gallup asked managers why they believed they were hired for their current role, the most common answers were that they had either been successful in a previous non-managerial role or had vast experience in their company or field [1]. The first – and most critical – step to increasing employee engagement is for companies to disrupt typical hiring practices and insist on hiring effective managers with a track record of transformational leadership qualities. Or, at the very least, companies should invest in training existing managers to develop these behaviors.

Creating Engaged Workers

Research shows that truly effective managers – ones who demonstrate transformational leadership behaviors – create healthier, more productive workers. A 2014 study by Walsh, Dupré, and Arnold examined how this leadership style affected employees’ psychological health [5]. Results show that transformational leaders are more likely to empower employees in a way that helps them feel self-determination, confidence, and competence, which directly impacts their psychological health. Empowered employees are also more likely to go above and beyond at work, which helps them gain trust and respect from colleagues and increases overall job satisfaction. In a country where wages are stagnant and anxiety disorders are prevalent, companies should prioritize improving employee psychological health. In turn, this will support employees’ well-being and reduce unexpected costs – like turnover – from staff burnout.

Additional research on highly effective leaders investigates how transformational leadership affects employee motivation, an important component of feeling engaged at work. The researchers find that transformational leaders are more likely to set concrete, challenging goals for employees, which energize employees and increase overall motivation [2]. Consider the fact that 79% of employees feel that they have little to no guidance from their manager [4]. This research suggests that by setting challenging goals for employees, leaders can increase motivation and help them feel more guided and supported.

Finally, other research explores how stressed leaders affect employees’ burnout levels. This study finds that the more emotionally strained leaders are, the fewer transformational leadership behaviors they demonstrate, such as communicating optimism, setting employee goals, and giving employees personal attention and feedback [3]. The results also support previous research showing stress can be transferred from managers to employees through dwelling on negative emotions and demanding work challenges. This research underlines what most people have felt before either as an employee or manager, or both: the more strained leaders are, the more likely they are to be poor managers and transfer their stress and negativity to staff, sapping motivation and causing employees to disengage. 

What if most employees felt optimistic about their workloads, received personal attention and guidance, and felt that their everyday tasks – ones often ignored yet extremely time-consuming – were appreciated? What if they looked forward to going to work? At present, the facts paint a bleak picture of how the average American feels about their job: 79% feel unmotivated to do outstanding work, 51% are actively searching for a new job, and just 15% feel their company’s leadership makes them feel enthusiastic about the future [4].

Everyone deserves to feel motivated, respected, and inspired at work. Far too many Americans across industries and levels of work feel undervalued, micro-managed, or left behind at organizations that often ignore their most valuable assets. If companies begin to disrupt typical hiring practices and, instead, cultivate leaders who demonstrate transformational leadership behaviors described above, both employees and companies will reap the benefits.

How to Protect An Aging Mind From Financial Fraud

Although aging is inevitable, financial fraud in old age isn’t. Elderly individuals in the US alone lose an estimated $3 billion a year to financial scams. Our research offers insights into how we can prevent this troubling reality.

The simple use of the language ‘grandma scam’ or ‘grandparent scam’ attests to the fact that elder financial abuse has become normalized, even unsurprising, in our increasingly digital world.

Older populations are not more susceptible to financial fraud simply because of naïveté or limited technological literacy. More accurately, older populations are victims of cognitive decline long before they are victims of financial scams.

Financial decisions are already some of the most stressful we make. These complex decisions become increasingly difficult to navigate as we age. Our research team wanted to understand: why do older adults tend to be more susceptible to fraudulent situations and how can we prevent this phenomenon?

Step 1: Why are older populations more susceptible to financial fraud?

The behavioral science literature offers some key insights into why we become more susceptible to financial fraud as we age.

  1. Older adults tend to rely more on System 1 thinking. They lean on fast, automatic, and unconscious processing, and less on System 2, which leverages fluid intelligence to reason and think through problems deliberately (Peters et al., 2000). Our crystallized intelligence (which comes with knowledge and experience) does increase with age, which compensates for decreases in fluid intelligence. However, crystallized intelligence starts to decline around age 70, leading to an overall reduction in decision-making quality in later years (Tymula et al., 2013).
  2. Older adults are more affected by choice overload. They find it more difficult to navigate a proliferation of choices, which can lead them to suboptimal decisions when overwhelmed. Specifically, they find it more challenging to sift through and ignore irrelevant information when placed in a situation that requires them to do so (Besedeš et al., 2010).
  3. Older adults have reduced numeracy and are less adept at assessing risk. Cognitive decline is closely associated with increased difficulty in reasoning about numbers (e.g., a 1% vs. 0.1% risk) (Best & Charness, 2015).

Step 2: How can we alter the decision-making environment to reverse the cognitive effects of aging?

The behavioral science literature suggests that one reason older populations may be more susceptible in financial decision-making environments is that they are prone to paying close attention to details in the present while ignoring the bigger picture, a phenomenon also known as ‘low-level construal’. This predisposition to think about details in the present (see the availability heuristic and salience bias) can increase vulnerability in financial decision-making environments. This is particularly troubling given that many fraudulent emails leverage the urgency to ‘act now’. For example, a foreign lottery scam with the message “Send a deposit of $200 within the next 24 hours to claim your lottery earnings” brings the present front and center. Biases towards present actions are amplified when the consequences of certain behaviors will be felt much later in the future.

Perhaps then, prompting abstract thinking, or ‘high-level construal’ would promote critical thinking about financial decisions in older populations. Our hypothesis was that incorporating personal financial risk assessment questionnaires into investment decision-making environments would effectively promote abstract thinking, ultimately reducing financial fraud susceptibility amongst older adults (aged 60+).

Testing our hypothesis

Our experiment involved 102 North American respondents, divided into equally sized younger (18-25) and older (60+) cohorts. The participants were randomly assigned to a treatment or control group. Both groups were shown an email about an investment opportunity, which incorporated many of the common red flags of fraudulent pitches. After reviewing the email, the respondents assessed the pitch along several dimensions, including appeal, willingness to invest, and perceived risk. The intervention for the treatment group was a Personal Financial Risk Assessment, which the respondents completed before viewing and assessing the email.

Fraudulent investment opportunity provided to participants in the experiment. According to the US Federal Trade Commission, financial scams are increasingly occurring online.

How a simple mind hack points to the potential of reducing susceptibility

When comparing the treatment and control groups, we found a significant effect of the intervention on susceptibility scores in those aged 60 and over. Older adults in the control group, who did not complete the personal financial risk assessment, were much more likely to consider the financial opportunity appealing and trustworthy, and expressed a greater willingness to invest. Furthermore, there was no effect of the intervention in the 18-25-year-old group, suggesting the intervention specifically targeted the cognitive deficits associated with old age.
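The group comparison behind a result like this can be sketched in a few lines of Python. The scores below are illustrative placeholders (higher = more susceptible), not the study’s data, and the simple Welch’s t-statistic stands in for whatever test the analysis actually used:

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical susceptibility scores for the 60+ cohort
older_control = [7, 8, 6, 9, 7, 8]    # saw the email with no prior risk assessment
older_treatment = [4, 5, 3, 5, 4, 4]  # completed the risk assessment first

diff = mean(older_control) - mean(older_treatment)
t = welch_t(older_control, older_treatment)
print(round(diff, 2), round(t, 2))
```

On real data, a t-statistic well above roughly 2 would correspond to the kind of p < .05 effect reported above.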

This graph reports the effects of a Personal Financial Risk Assessment intervention on susceptibility to financial scamming in 18-25-year-olds versus 60+-year-olds. This intervention was effective in reducing susceptibility (p < .05), but only in the 60+-year-old group, suggesting that it may specifically target decision deficits linked to cognitive decline.

Key Takeaways

As the world becomes increasingly digital, and populations age, financial scams may become more common. The more tools we can provide to vulnerable populations to protect themselves from “falling for it”, the fewer victims there will be to face the negative consequences.

Overall, the key things to take away from this article are the following:

  • Although people of any age can fall victim to financial fraud, older individuals are at the greatest risk due to changes in cognitive functioning.
  • Biases towards “detail-oriented” thinking vs “big-picture thinking” in older populations increase overall susceptibility to financial fraud.
  • Interventions such as personal risk assessments, which prompt “big-picture” thinking, encourage critical thinking and can, in turn, reduce susceptibility to financial fraud.
  • Our findings support existing literature, which demonstrates that older adults strategically change their preferred decision modes from being deliberative to more heuristic-based in order to compensate for cognitive declines in everyday functioning (Mutter and Pliske, 1994; Yates and Patalano, 1999).


Best, R., & Charness, N. (2015). Age differences in the effect of framing on risky choice: A meta-analysis. Psychology and Aging, 30(3), 688–698.

Denburg, N. L., Tranel, D., & Bechara, A. (2005). The ability to decide advantageously declines prematurely in some normal older persons. Neuropsychologia, 43(7), 1099–1106.

Grable, J. E., McGill, S., & Britt, S. (2011). Risk Tolerance Estimation Bias: The Age Effect. Journal of Business & Economics Research (JBER), 7(7).

Kannadhasan, M. (2015). Retail investors’ financial risk tolerance and their risk-taking behaviour: The role of demographics as differentiating and classifying factors. IIMB Management Review, 27(3), 175–184.

Perez, A. M., Spence, J. S., Kiel, L. D., Venza, E. E., & Chapman, S. B. (2018). Influential Cognitive Processes on Framing Biases in Aging. Frontiers in Psychology, 9, 661.

Reed, A. E., & Carstensen, L. L. (2012). The Theory Behind the Age-Related Positivity Effect. Frontiers in Psychology, 3.

Roalf, D. R., Mitchell, S. H., Harbaugh, W. T., & Janowsky, J. S. (2012). Risk, Reward, and Economic Decision Making in Aging. The Journals of Gerontology: Series B, 67B(3), 289–298.

Agarwal, S., Driscoll, J. C., Gabaix, X., & Laibson, D. (2009). The Age of Reason: Financial Decisions over the Life Cycle and Implications for Regulation. Brookings Papers on Economic Activity, 2009(2), 51–117.

Weller, J. A., Levin, I. P., & Denburg, N. L. (2011). Trajectory of risky decision making for potential gains and losses from ages 5 to 85. Journal of Behavioral Decision Making, 24(4), 331–344.

Optimism Is Good For Many Things, But Not Pension Savings

I will live longer than average; Social Security will afford me a decent living in retirement; and if I need more income, I will just take up a part-time job or delay my retirement for a few years. These are just some of the optimistic thoughts many of us have about our retirement. The culprit? Optimism bias, or our tendency to overestimate the probability of positive future events and underestimate the probability of negative ones.

Optimism bias is considered one of the most prevalent and robust cognitive biases observed in behavioral economics, transcending gender, race, nationality, and age [1]. But like most things in life, even optimism has its negative sides, which can have a huge impact on our lives. According to the Fed, 40% of Americans today can’t cover an unexpected $400 expense, and the majority of those approaching retirement have no or inadequate savings, bringing the nation to a retirement savings crisis [3].

Mortgage and student debt are at record highs, and mounting studies suggest our over-optimistic outlook on the future is partly to blame, making further research in this field crucial for the financial wellbeing of future generations.

Why is the glass half full?

Tali Sharot, one of the leading neuroscientists in this field, describes optimism bias as a cognitive illusion that we are blind to and live with, without realizing its impact on our behavior. She provides a chilling explanation for its development. Since humans are one of the few species with the ability of conscious foresight (mental time travel), we are aware that somewhere in the future, death and other bad things await us. Without developing positive biases during our evolution, we could not function normally every day knowing death is around the corner [4].

Biologist Ajit Varki argues that the awareness of mortality on its own would have brought human evolution to a stop, making our biggest evolutionary advantage – self-awareness – also our greatest weakness [5]. Haselton et al. explain that cognitive biases like optimism are not necessarily design flaws, as most perceive them; instead, they could be design features that evolved because they positively impact our health [6].

Sharot argues further that a brain that could consciously travel through time would, on its own, be an evolutionary barrier and not an asset, and that it is this combination of conscious prospection and optimism that made most of humanity’s achievements possible. Our brain’s ability to underestimate the likelihood of negative future events reduces our levels of stress and anxiety, which is good for our health; without it, we would be ground to a halt by all the worries of life [4]. From childhood into adulthood, we constantly simplify in our minds how the future will unfold and overestimate how successful we will be, and in that way the optimism bias gives us certainty [1].

What the research says

The latter theory is confirmed by recent neuroscientific research suggesting optimism bias has deep cognitive roots. We encode undesirable information in a distorted manner, which in turn leads to the relative amplification of desirable information [7]. We exhibit selective attention to incoming positive information concerning the future, and our beliefs are then selectively updated in favor of this preferred positive information [8].

Sharot et al. (2012) call this selective updating. It occurs in the frontal cortex, a brain region involved in monitoring prediction errors. Signals coding prediction errors for negative updates have been discovered to be much weaker than those for positive ones, thus enabling selective updating in favor of positive information [9]. Building on these findings, researchers focused on the brain’s reward system, which involves the caudate nucleus and limbic system, and discovered that the neurotransmitter dopamine plays a central role in processing reward stimuli [4]. But what does this mean in a wider context?

Why optimism is good for many things, but not pension savings

As the above theories suggest, optimism bias seems to be, from an evolutionary point of view, one of the keys behind human success, driving us to pursue various goals. However, as Puri and Robinson nicely put it, “optimism is like red wine, a glass a day is good for you, but a bottle a day can be hazardous” [13].

The same optimism that drove us to explore the oceans also fuels our belief that nothing bad will happen to us, meaning we often fail to act in our best interest, such as putting some money aside for an unexpected expense and our retirement. We are inclined to see ourselves moving happily toward professional success, financial security, and stable health. Unemployment, divorce, debt, Alzheimer’s, and other common misfortunes are rarely factored into our projections [4].

One of the reasons 40% of Americans today can’t cover an unexpected $400 expense is that they think they won’t have to. Similarly, in our rosy-eyed view of retirement saving, we believe that Social Security will not be so low in the future, and that we will just work a bit longer to bridge the gap between retirement needs and resources, although the Health and Retirement Study reveals that roughly 37% of those working at age 58 ended up retiring earlier than they had planned [14]. The gap between when active workers expect to retire and when retirees say they actually did is also clear from the latest Retirement Confidence Survey, where workers continue to report an expected median retirement age of 65, while retirees report that they retired at 62.

Prudential’s latest study also reveals the gap between employees’ optimistic outlooks and reality: 51% of retirees retired earlier than planned, confirming the large gap between the age when workers plan to retire and the age when they actually do. Only 23% retired earlier than planned voluntarily, meaning they had enough money saved, wanted to retire, or were simply tired of working. The majority of those who retired earlier than expected did so involuntarily: 46% because of health problems, 30% because they were laid off or offered an early retirement package, and 11% because they left work to care for a loved one.

Retiring early has a substantial negative impact on retirement income: lower Social Security benefits and lower private savings because of a shorter saving period. Retiring just 5 years early can reduce income in retirement by 36%, which can be the difference between an income that enables decent living and one that doesn’t. The last few years before retirement are especially important for building our nest egg, as they are on average also our top-earning years; losing just one will reduce our retirement income substantially, putting a high price on our optimistic predictions about how long we will work.

A similarly over-optimistic outlook is revealed regarding expected costs in retirement: 37% of retirees say their overall cost estimates turned out to be low. Healthcare costs seem to be the most underestimated, as 44% said they faced higher-than-expected costs [15]. Combine that with the other behavioral biases related to pension saving that researchers have identified in recent decades (procrastination, loss aversion, herd behavior, etc.), and we end up with an overlooked crisis: the majority of Americans approaching retirement have no or inadequate savings to provide for decent living [3]. Retirement outlooks are not much better in Europe, where, according to the latest ING International report, 61% of Europeans who have not yet retired worry about having enough money in retirement and 54% expect they will need to keep earning in retirement.

Outside of retirement

Optimism also kicks in when we are buying cars and homes or taking on student loans. Research has found that most students taking on student debt underestimate the time they will need to pay back the loan and overestimate their future incomes. This is one reason more than 40 million Americans currently owe $1.2 trillion in outstanding student loan debt [16]. In housing markets, over-optimistic beliefs have often been cited as major contributors to the run-up in house prices prior to the recent financial crisis [7]. We see our dream home, which is usually more expensive than planned, but we just take out a bigger mortgage.

Of the $13.21 trillion of household debt in the U.S. in the first quarter of 2018, a staggering $8.94 trillion is comprised of mortgages, according to the latest Fed report. Why worry, when surely we will get that promotion at work, our salary will increase, and real estate prices will rise? What if we don’t get that promotion, or even lose our job? But that’s just being pessimistic, most would say – and yet the last financial crisis revealed the full extent of what happens when our over-optimistic beliefs don’t materialize.

How can we avoid the dark side of optimism?

First and foremost, we need to acknowledge and accept that our predictions of the future are in most cases overly optimistic. That’s why we need to rely on facts and figures, even if we don’t like them. Check the latest statistics on how high social security or public pensions are in your country; if the average public pension replacement rate is 50%, apply that to your current salary and ask yourself one simple question: can you live off that? Then ignore the optimistic voice in your head telling you it’s somehow all going to be OK, and take some time to reflect on your judgment.
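That reality check is simple arithmetic; a minimal sketch, assuming a hypothetical $4,000 monthly salary and the 50% replacement rate used as an example above (both numbers are illustrative, not data from this article’s sources):

```python
def expected_pension(monthly_salary, replacement_rate):
    """Rough pension estimate: current salary times the public replacement rate."""
    return monthly_salary * replacement_rate

salary = 4000.0  # hypothetical current monthly salary
rate = 0.50      # example average public pension replacement rate

pension = expected_pension(salary, rate)
print(pension)  # prints 2000.0 -- the number to ask "can I live off this?" about
```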

For our pension saving, we can take some advice from construction work. If you have ever built or renovated a house or an apartment, you probably experienced cost overruns. That’s why all experienced builders budget a bit extra to be on the safe side, and why the British government published special guidelines for construction appraisers on how to adjust estimates of project costs, benefits, and duration to counter optimism bias in planning.

Adjustments to counter optimism bias are now factored into the budgets of most government projects [4]. The same logic can be applied to our predictions of how long we will work: if you estimate you will retire at 67, subtract 3 years and base your calculations on that instead. According to Aegon, even a simple written saving plan goes a long way toward increasing our savings, and those with a written retirement strategy are more likely to turn their intentions into actions. The research also establishes that those with a written saving strategy have, on average, much higher saving rates than those without one.

The role of technology

For the more tech-savvy, there is now an abundance of new mobile apps that help us save and manage our money by combining behavioral economics and the latest technology to reach our optimistic life goals without breaking the bank.

The Acorns app has some smart behavioral science behind it: one feature, known as “round-ups”, automatically saves and invests your spare change by rounding up purchases to the next dollar and putting the extra change into an investment account. Because this happens automatically, people save and invest their spare change without any mental effort. Shlomo Benartzi, a professor at the UCLA Anderson School of Management and co-author, with Richard Thaler, of the famous retirement savings program Save More Tomorrow, joined Acorns late last year as a senior academic advisor and behavioral economics committee chair. He now runs behavioral experiments to incentivize Acorns’ more than 4 million users to save and invest more.
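The round-ups mechanic itself is easy to sketch. The snippet below is an illustration of the idea only, not Acorns’ actual implementation (which may, for instance, treat whole-dollar purchases differently):

```python
import math

def round_up(purchase):
    """Spare change: the gap between a purchase and the next whole dollar."""
    return round(math.ceil(purchase) - purchase, 2)

purchases = [3.25, 7.80, 12.00, 4.05]  # a hypothetical day of card purchases
saved = round(sum(round_up(p) for p in purchases), 2)
print(saved)  # prints 1.9 (0.75 + 0.20 + 0.00 + 0.95)
```

Each purchase contributes a small, barely noticeable amount, which is exactly why the mechanic works: the saving happens without a deliberate decision each time.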

His first experiment focused on whether framing savings in more granular formats – saving daily versus monthly – can encourage increased saving behavior. He asked one simple question in three different ways: Would you like to save $5 every day, $35 a week, or $150 a month? Even though the total amount set aside is roughly the same, only 7% opted to save $150 a month, compared to 30% who decided to save $5 a day.
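The three framings are close to equivalent in monthly terms, which is what makes the gap in uptake so striking; a quick check (converting with 365 days and 52 weeks per year):

```python
daily, weekly, monthly = 5, 35, 150

# Express each framing as an approximate monthly amount
from_daily = round(daily * 365 / 12)   # $5/day over a year, spread across 12 months
from_weekly = round(weekly * 52 / 12)  # $35/week over a year, spread across 12 months

print(from_daily, from_weekly, monthly)  # prints 152 152 150
```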

The experiment revealed the power of framing, a well-known concept in behavioral economics: framing deposits in daily rather than monthly amounts quadrupled the number of savers, showing how a small tweak in language can convince users to save more. According to Benartzi, the reason is that smaller, more granular amounts seem psychologically less painful and more feasible than larger, less granular amounts, so framing recurring deposits in terms of smaller, daily deposits should be more appealing to users across the income spectrum [17].

The Qapital app also has some smart features that exploit our behavioral biases to increase saving rather than reduce it. The app, which brought on Dan Ariely, a professor of psychology and behavioral economics at Duke, as its chief behavioral officer, gamifies spending behavior by creating fun saving rules and giving users positive visual feedback along the way. The app’s goal, according to its Swedish founder and CEO George Friedman, is to make saving more automated and personalized.

By making saving more visual and connected to our daily lives, the app increases our commitment to saving goals. Qapital found that users who set up custom saving rules (for example, saving a set amount every time the temperature dips below freezing and putting it toward a goal for a warm vacation, represented in the app by a picture of the future trip) saved many times more than users without custom saving rules.
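The custom-rule mechanic is essentially a trigger-action pairing: when an everyday event fires, a fixed amount moves toward a named goal. A hypothetical sketch of that structure (Qapital’s actual implementation is not public; every name here is invented):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    name: str          # e.g. "Warm vacation", shown as a photo in the app
    saved: float = 0.0

@dataclass
class SavingRule:
    trigger: Callable[[dict], bool]  # predicate over an observed event
    amount: float                    # fixed transfer each time the trigger fires
    goal: Goal

    def apply(self, event: dict) -> None:
        if self.trigger(event):
            self.goal.saved += self.amount

# The rule from the article: save $5 every time the temperature dips below freezing.
vacation = Goal("Warm vacation")
freeze_rule = SavingRule(lambda e: e.get("temp_c", 99) < 0, 5.0, vacation)

for event in [{"temp_c": -3}, {"temp_c": 4}, {"temp_c": -1}]:
    freeze_rule.apply(event)

print(vacation.saved)  # 10.0: two freezing days, $5 each
```

Tying the transfer to an arbitrary external event is what makes the rules feel like a game rather than a budget.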

Behavioral science backs this up. As mentioned earlier in this article in the case of optimism bias, numerous studies have demonstrated that our brain encodes desirable future information in an amplified way [7], and we tend to experience more intense emotions about future events than about events in the past. That is because, in general, we expect future events to make us feel more emotional than events that have already passed. We are also more likely to talk about how excited we are about something we have planned for the future than about something we have already done [18]. According to a recent fMRI study, positive future events activate a part of our prefrontal cortex that gives us a general sense of well-being, confirming the theory that the mere anticipation of a future holiday or some other positive event makes us feel good, with great benefits for our well-being and mental health [9].

The future self as a stranger

Research by Hal Hershfield and his colleagues also provides interesting insight into how advanced visualizations (digital avatars of ourselves at older ages) can increase our connection with our future selves and, in doing so, our willingness to save for retirement. In one study, they ran fMRI scans on participants and found that the neural patterns seen when people described themselves 10 years in the future differed from those seen when they described their current selves.

In fact, the patterns were more similar to those seen when people described strangers, revealing the disconnect people feel from their future selves. This disconnect has a negative impact on their willingness to save, because their brain does not register that they will in fact be the ones receiving the rewards of those savings; it registers a stranger.

Would you save money for a stranger? The answer is likely no. To change this behavior, the researchers showed participants digital avatars of themselves at older ages and then asked them to allocate $1,000 among four options: buying something nice for someone special, investing in a retirement fund, planning a fun event, or putting money into a checking account. Participants exposed to aged avatars put almost twice as much money into the retirement fund as the other participants.

Concluding thoughts

To get ourselves out of this retirement savings crisis, we have to harness our optimism about the future toward clear, visual goals that motivate and drive us (buying our dream home, or traveling to Tuscany in retirement), and then let technology help us reach those goals as effortlessly as possible. We should encourage people to focus on their future goals, write them down on paper or in a savings app, and provide simple tools, like the apps mentioned above, to follow through on them automatically, harnessing our optimism about the future for our own benefit. That way, the glass really will be half full in the end.


[1] O'Sullivan, Owen P. The neural basis of always looking on the bright side. Dialogues in Philosophy, Mental and Neuro Sciences, 2015.

[2] Vižintin, Žiga. Pretirani optimizem lahko tudi škoduje (How optimism bias can have a negative impact on our personal finance). Dnevnik newspaper, Volume April, 2018. Retrieved from:

[3] Rhee, Nari & Boivie, Ilana. The Continuing Retirement Savings Crisis. 2015. Retrieved from:

[4] Sharot, Tali. The Optimism Bias: A Tour of the Irrationally Positive Brain. New York, NY, US: Pantheon/Random House, 2012.

[5] Varki, Ajit. Human Uniqueness and the denial of death. Nature 460, No. 7256, 2009.

[6] Haselton, Martie G., Nettle, Daniel & Murray, Damian R.. The Evolution of Cognitive Bias. Part VII. Interfaces with Traditional Psychology Disciplines. 2015.

[7] Sharot, Tali, Riccardi, Alison M., Raio, Candace M., & Phelps, Elizabeth A. Neural mechanisms mediating optimism bias. Nature, vol. 450, 102–105, 2007.

[8] Eil, David and Justin M. Rao. The Good News-Bad News Effect: Asymmetric Processing of Objective Information about Yourself. American Economic Journal: Microeconomics, 3 (2): 114-38, 2011.

[9] Sharot, Tali, Guitart-Masip, Marc, Korn, Christoph W., Chowdhury, Rumana, & Dolan, Raymond J. How dopamine enhances an optimism bias in humans. Current Biology, 22:1477-1481, 2012.

[10] Sharot, Tali, Shiner, Tamara, Brown, Annemarie C., Fan, Judy, & Dolan, Raymond J. Dopamine enhances expectation of pleasure in humans. Current Biology, 19(24), 2077-80, 2009.

[11] Nettle, Daniel. Adaptive illusions: Optimism, control and human rationality (In Evans, D.  & Cruse, P., Emotion, evolution and rationality – pp. 193–208). Oxford University Press, 2004.

[12] Lefebvre, Germain, Lebreton, Maël, Meyniel, Florent, Bourgeois-Gironde, Sacha & Palminteri, Stefano. Behavioural and neural characterization of optimistic reinforcement learning. Nature Human Behaviour. 1. 0067. 10.1038/s41562-017-0067, 2017.

[13] Puri, Manju & Robinson, David T. Optimism and economic choice. Journal of Financial Economics, 86, No. 1, 71-99, 2007.

[14] Munnell, Alicia H., Sanzenbacher, Geoffrey T. & Rutledge, Matthew S. What causes workers to retire before they plan? Center for Retirement Research at Boston College, 2015. Retrieved from:

[15] Employee Benefit Research Institute (EBRI) & Greenwald & Associates. Retirement Confidence Survey 2018. Retrieved from

[16] Perna, Laura W., Kvaal, James, & Ruiz, Roman. An Updated Look at Student Loan Debt Repayment and Default. Penn Wharton Public Policy Initiative, Volume 46, 2017. Retrieved from:

[17] Hershfield, Hal E., Shu, Stephen, & Benartzi, Shlomo. Temporal Reframing and Savings: A Field Experiment (Working Paper). January 2018.

[18] Roberts, Martha. The joy of anticipation. 2014. Retrieved from:

[19] Mazar, Nina, Mochon, Daniel & Ariely, Dan. If You Are Going to Pay Within the Next 24 Hours, Press 1: Automatic Planning Prompt Reduces Credit Card Delinquency. Journal of Consumer Psychology. 2018. Retrieved from:

Nudging the nudgers, machine learning and personalized interventions: A Conversation with David Halpern

Art by versusthemachines.
Conversation with David Halpern from the BIT and Jakob Rusinek from The Decision Lab

“One of the telltale signs we’ll know if we’re successful is that we are not separated out as behavioral science anymore. You know what I mean? Empiricism won’t be ‘oh we do experiments.’ That’s just how you should do public administration. That’s how you should run things. Companies will just have this built into them, governments will have it built into them.”

In today’s episode, we are joined by the Chief Executive of the Behavioural Insights Team (BIT), David Halpern. He has led the team since its inception in 2010 and was the founding director of the Institute for Government. Between 2001 and 2007, David held tenure at Cambridge and posts at Oxford and Harvard. He has written several books and papers on behavioral insights and wellbeing, including Social Capital (2005), The Hidden Wealth of Nations (2010), and Inside the Nudge Unit (2015). David is also a co-author of the MINDSPACE report.

In this episode, we discuss the current state of the behavioral science industry and its role within the public and private sectors, as well as predictions for how it will evolve.

Specifically, we discuss:

  • Nudging against violence (domestic violence, classroom violence and civil violence)
  • What nudging means in 2019 and how it will evolve in the next 10 years
  • Behavioral science and machine learning: the implications of personalized interventions
  • Nudging the nudgers: making nudging more ethical through enhanced democratic deliberation
  • Interfacing public- and private-sector nudging for maximum impact
  • The skills and experience you need to work in applied behavioral science
  • How nudging should be regulated and who should decide the ethical boundaries of nudging
  • The future of the BIT: exciting projects and challenges

Key Quotes

From building an intervention to building a process
“The challenge is not just for the development of a one-off intervention, but also designing a platform and a way of working in your organization to be able to perpetually tune, refine, innovate, try new things. That’s often been one of the biggest steps.”

Some things are obvious but wrong
“Sometimes it’s being considered so self-evident that something is a no brainer; you say: ‘Of course, you should do the following.’ In medicine, particularly, this is referred to as the parachute test. It would be unethical to run a controlled trial because it’s so obvious that this thing is more effective [than what we currently use]. What is worth bearing in mind is there are now some very famous examples where people made that argument and when someone finally got around to doing a controlled trial, it turned out that the assumption was wrong. Wherever we can, we should check—and we should also check for other reasons, which is that: do we understand what the active ingredient is?”

People build their environment, and vice versa
“I’m quite a big believer in essentially forms of enhanced democratic deliberation that enable samples of the public to walk that journey with us, hear different views and then let them make the decisions, essentially. Let them give us permission if they think it’s right to make the change. Put the power back into the hands of individual consumers, families and citizens to be able to shape the world, which in turn shapes them.”

The public needs to see inside the box
“I don’t think you just handle these arguments with a very elegant ethical internal argument. You do need to engage the public directly and have them be involved in shaping this, as sometimes put, it really matters who is nudging the nudger and we need to have strong mechanisms both in government but actually also in relation to private-sector players.”

Nudging at the public–private interface
“Some of the most interesting problems and puzzles in behavioral science are at the interface between public-sector and private-sector operations. Designing markets that ensure that good companies and good products really come to the fore and grow. Right? What does that look like? How do you design two-way platforms that have the right characteristics that support innovation and growth?”

Nudging for good: conflicts and social mobility
“We’re really into what can you do to use behavioral science to stop conflicts and to reduce the reignition of previous conflicts which is very exciting. Social mobility will be another grey area and one of those great deep issues.”

When testing becomes the norm
“One of the telltale signs we’ll know if we’re successful is that we are not separated out as behavioral science anymore. You know what I mean? Empiricism won’t be ‘oh we do experiments.’ That’s just how you should do public administration. That’s how you should run things. Companies will just have this built into them, governments will have it built into them.”

Articles, books and podcasts discussed in the episode

David’s work

Other mentions


Jakob: Thank you so much for joining us today, David. We’re very honoured to have you.

Jakob: Today, we’ll be discussing some of the large trends in the industry, but before we do that, I think a lot of people who are familiar with you would love to know, what are you working on these days? What are some of the more exciting projects you can share with us?

David: That’s like asking, “Who’s your favourite child?” There are a lot of exciting things we’re doing. High on the list is some of the work we’re doing on domestic violence, which is tough; there aren’t very many interventions that work well. There is a particularly interesting intervention in Australia, which is going very well. We are also doing a lot of work around financial behavior and how it interacts with disadvantage and tunneling effects and so on.

David: Actually, a lot of our work these days is also market design, moving backwards and forwards between quite micro behavior (on sugar, for instance: if you change positioning, or use different labels, how does that affect what people choose?) right up to how that feeds into design at the macro level. We’ve spent a lot of time on the design of the UK sugar tax and sugar levy, which has been highly effective in terms of reducing volumes. So it’s that relationship between the micro and the macro.

Jakob: Great, thank you so much for that, David. If I can pick out one particular theme, one of the ones you just mentioned sparked my interest, because I happen to work a bit on violence prevention projects at the World Bank Group. You mentioned the topic of domestic violence, particularly in Australia. Could you share with our listeners a little more about what this type of work entails?

David: Sure. On violence, in fact, we’re doing quite a lot of work. Across the world, a large share of police activity by volume involves violence, and as violence comes down, attention settles on other kinds of crime or problematic behavior. The work ranges widely. We’ve been doing work in refugee camps to see if we can reduce violence in classroom settings, which are very tough environments indeed, where people are almost trying to do crowd control: what is an effective intervention in that context? We’ve had some success there. The work on domestic violence has been in relation to people at various points, for example when they turn up to court, though we don’t have the results yet. We have a very interesting intervention in the field, working with caseworkers and with people who are often trying to stop their own behavior, using implementation intentions to help them plan ahead, think about what’s happening, and act differently in the moment.

David: The more extreme form of that is some work we’re doing in Latin America to see if we can make the reignition of a civil war less likely; two wars start every year. One of the things that sits behind this is that people are now maybe increasingly familiar with using behavioral science to get people to pay their tax on time, or to get us to save more for our pensions, or whatever. But a lot of human behavior is the flesh-and-blood stuff, literally in the case of violence: emotional, hot-headed, groupish. Behavioral science ought to have, and I think does have, quite a lot to say about these types of issues too.

Jakob: Fantastic, thank you, David. As you said, the expertise that we, and some of our listeners, bring has been out there for quite a while: how to apply behavioral science to change letters, et cetera. I think a lot of people would be interested in how to apply it to such complex issues as violence. Maybe we can circle back to that later, but let me move on and shed some light for listeners who may not even be familiar with behavioral science per se. Interest in applied behavioral science seems to be continuously growing. With that said, views about what nudging, irrationality, behavioral economics, or behavioral science are have also shifted. If I may ask you, David: what do you think nudging means in 2019, and how do you think this is likely to change in the coming years?

David: Well, different languages translate it in slightly different ways, but for most people, I think it means a gentle prompt. It’s often seen in relation to other kinds of levers, such as a financial incentive or a rule with a very strong sanction attached to it. A nudge is a much gentler prompt or cue than that, one which, in principle, the individual can override if they want to make a different choice. In that sense, it fits with its North American origins of being, if not actually choice-enhancing, at least not choice-reducing where possible. Maybe that’s what it still means to many people. Among practitioners, though, it’s deepening and broadening in the areas where it’s being used. People were used to it being a very simple one-off cue, like, “Oh, you know, those people pay their tax on time.”

David: “Oh, that’s interesting,” and then you become more likely to pay your tax, a result which of course we’ve now replicated in many countries. But nudging has lots of other aspects to it, so it can be applied to public service professionals. We’re just about to release results on increasing clinicians’ referral rates for patients who have symptoms that might be cancer, with quite a big effect size; sometimes you want to nudge experts’ behavior. Or take another medical example: more than a million people in the world still die from tuberculosis, and yet we’ve had a treatment for tuberculosis since the 1940s. A key issue is that people don’t persist with their treatment.

David: The classic World Health Organization approach is what’s called observed treatment: you literally watch someone to make sure they’re taking their pills. It’s pretty clunky and difficult, and we’ve done an intervention and a trial in Moldova which doubles observed treatment. That’s more than just a cue in a letter; some of these things are much more expensive. It’s a pretty exciting area. It continues to be very, very vibrant, and it’s also bumping into other kinds of new approaches, certainly in our work and I’m sure in many others’, such as machine learning combined with big data, in a way that’s really exciting.

Jakob: Fantastic, thanks David. On that last point, I’d like to pick up a bit, because machine learning, and especially the overlap with AI, is also something we’re looking into with great curiosity at The Decision Lab. As a follow-up, specifically: what is the BIT currently looking at when it comes to the overlap of behavioral science and machine learning?

David: We ourselves have done and published, as you may know, a number of, if you like, interventions and explorations using machine learning, and some people have moved even further inside the field, people like [Sandra 00:08:04], who’ve gone deep into some of these issues. Maybe because it’s a very empirical approach, people get drawn into it. We’ve started using it in a number of ways. One is that you can of course use these massive data sets, including around people’s behaviors; often it’s almost exhaust data, which turns out to be key and predictive. An example would be revisiting the original New York Bloomberg work on trying to work out where to send your inspectors. What are the clues telling you that a doctor or a school might not be very good, and how can you use those clues, and behavior, and very large data sets to identify some of those issues? That’s a pretty important application.

David: It can also be used at the individual level, where we’ve done some work. We’ve done work identifying cases, or individuals, kids at risk, where the system has, in retrospect, wrongly marked them for no further action when we should have done something more. One of the reasons why it all comes together, I should explain (I’ll just run with a medical example), is that it’s all very well identifying the clinician who appears to be underperforming in some significant way, in other words missing diagnoses, et cetera. At the same time, remember, we’re very busy developing interventions; that’s a great deal of what we do, including for clinicians. We’ve done work, for example, with clinicians who are over-prescribing antibiotics, or, as I mentioned earlier, clinicians who might be under-referring people who have symptoms that may well be cancer.

David: Well, if you bring those two things together, it becomes very powerful. We can identify where there’s underperformance in a systematic way, and then we can do something about it. One of the holy grails, and certainly an active area, is whether you can work out which intervention would work for whom. Interventions become much, much more tailored to help support someone: for example, a disadvantaged student in college, where we make sure they get there, or, once they get there, help them perform best, or work out which intervention would be best for which students so they get the best out of their educational experience and do best in life. You know what I mean? It might not be the same treatment for everyone, and using these techniques enables us to do, if you like, precision interventions, or what [inaudible 00:10:42] might have called personalized defaults in some way.

Jakob: Right, okay, that makes perfect sense. One thought that pops into my mind, though we’ll get to it in a later question, is that the more you individualize interventions, the more complex testing potentially becomes. Right? Some of the organizations we’ve been working with, or hearing about, like the behavioral science approach because they can use it somewhat at scale, and therefore achieve some cost-effectiveness in testing, versus the individualized approach. Again, I’ll ask you a question on that a bit further down the line.

Jakob: I’d like to shift gears to talk about the state of the industry. You’ve mentioned previously (you didn’t mention it today, but we know) that one of the most important things the Behavioural Insights Team has contributed to applied behavioral science is a stronger emphasis on empiricism. While this is obviously something that can be very beneficial to organizations, it also comes with a lot of challenges, especially as more and more nudge units spawn, for lack of a better word. At times we hear, especially from the private sector, that they don’t have the oft-needed luxury of time and budget to conduct complex randomized controlled trials, yet they still want to apply behavioral science in a sound, somewhat defensible manner.

Jakob: David, if I were to ask you: what do you think are the biggest challenges for an organization looking to apply behavioral science in an empirical manner, and what’s your take on how these can be tackled?

David: Yes. Remember, it’s not just businesses; governments have the same challenge, where sometimes there’s real urgency to act. You have to make a decision; it’s no good saying, “Hmm, let’s think about it and run a whole lot of trials.” You do have to take a view, and sometimes what you’re doing is drawing on your history of previous interventions to say, “Well, if we have to take a punt in this situation, we think this would be your best runner.” For both the private sector and governments, one of the implications of this, and we often talk about it in terms of humility, is that human beings are complicated: they may not operate along the model you think they do, and we should be empirical. If you’re going to try something, don’t just try one version of it; try multiple versions and see which is more effective.

David: In some parts of the commercial world, not least where it meets digital and A/B testing, the use of experimentation has become very widespread, in fact on an absolutely massive scale, as you might know. In other areas, however, it’s not used at all. We’re actually changing, in Britain, what’s called the Green Book, the Treasury book which tells policymakers how they should go about appraising options and so on. It used to say: do your cost-benefit analysis, then try the intervention, and it would be a good idea afterwards to evaluate it. We’ve now moved to a world where we’re saying, “No: wherever you can, run multiple versions.” It’s not a one-off; experimentation should never end. You build machinery and practices designed to make it easy and low-cost to experiment, in the same way that A/B testing does for a website. You should do that in many, many areas.
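Halpern’s “run multiple versions” advice is, mechanically, the same discipline as website A/B testing: randomize each recipient into a variant, then compare outcome rates across arms. A toy simulation of that loop (the variant names and response rates here are invented purely for illustration):

```python
import random

random.seed(0)
variants = ["control", "daily_frame", "social_norm"]

def simulate_response(variant: str) -> bool:
    # Stand-in for real-world behavior; a live trial records actual outcomes.
    rates = {"control": 0.10, "daily_frame": 0.30, "social_norm": 0.18}
    return random.random() < rates[variant]

counts = {v: 0 for v in variants}
successes = {v: 0 for v in variants}
for _ in range(30_000):
    v = random.choice(variants)  # random assignment is what makes the arms comparable
    counts[v] += 1
    successes[v] += simulate_response(v)

for v in variants:
    print(f"{v}: {successes[v] / counts[v]:.3f}")
```

The key step is the random assignment: because every other factor is balanced across arms on average, differences in response rates can be attributed to the variant itself.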

David: I’ll give you a concrete example. One of the trials we’ve been very proud of in recent years, where we worked with top researchers at Harvard and others, was a beautiful study-supporter intervention for young people, who were asked whether they would like to nominate two people to be their study supporters. These people then receive information about what’s going on in school and in college, and they might ask the young person, “Well, how did you find that topic? Don’t forget to bring your calculator for the test,” or whatever. The results were fantastic: it increased attendance, and it increased the pass rate among people retaking exams by about 28% in the main trial we did. That’s a huge effect size, and pretty cheap. We could just leave it at that and say, that’s fantastic, but why would we?

David: We are now also running variations: what happens if the texts go to the individual rather than the study supporter? Which text is the best one to use? When should you send it? The challenge is not just the development of the one-off intervention; it’s also designing a platform and a way of working in your organization so you can perpetually tune, refine, innovate, and try new things. That’s often been one of the biggest steps. Even the simplest example, like a firm trying to redesign its invoicing, or a government trying to sharpen up how it collects taxes, runs into this.

David: In the early work, often with a big organization, it’s like: well, how would we even run a randomized controlled trial? We’re not set up to do that. You have to redesign the systems to make that possible, but once you’ve done that, you’re really going somewhere: you’ve built an engine, essentially an evidential engine, to move forward.

Jakob: Right, I think that’s pretty understandable, and especially when we speak of policy change, when we speak of scale, it’s understandable that this has to be tested as well as possible. Your point on why you would stop at just one test, when you can conduct variations that improve things even further, makes it almost obvious that you should go further into testing. So that resonates well.

Jakob: David, I just want to share an example with you and ask what your take is on the so-called “just do it” approach. When I was working on one of our behavioral interventions, what we actually call behavioral audits, in the Comoro Islands in East Africa, the issue was electricity bills and people not paying them. We went there and did some behaviorally informed surveys, focus groups, and shadowing exercises. We very quickly found very simple and almost obvious barriers. For example, people have to go to a payment station physically, and there is only one on the island where they can pay their bill.

Jakob: There we figured it was almost a no-brainer that if we somehow brought the bills to them, or set up some mobile payment system, that would improve at least the payment rates. Right? We, or rather the project leaders on this particular project, didn’t even have any interest in conducting an RCT on this. It was a case of: well, let’s just do it and see how it works. This is a simple example from a pretty small-scale project, but for people out there in the behavioral science world, what do you think of this “just do it” approach?

David: Well, look, I’ve been a policymaker for a long time, and there are times when you do have to just do it. Right? It may seem obvious, or it may be too complicated to trial. There are many interventions or changes we’ve been involved in. Take the change to automatic enrolment in pensions in the UK: literally more than 10 million extra savers now, and of course similar things with the 401(k). We actually didn’t have an RCT at the beginning of that, although we had lots of evidence, and as the data come in it is overwhelmingly obvious that it has been effective. Or promoting the use of electronic cigarettes, where we changed the policy to make them available, on the basis that if you’ve got addictive behavior, you want to introduce a less addictive alternative. There were lots of bits of information which we were converging on, and we thought this was the right approach, but we would’ve loved to have run a pure RCT to check some of the assumptions.

David: That does happen sometimes. But there is one really important salutary point worth bearing in mind, and people like Ben Goldacre have made this argument: sometimes something is considered so self-evident that it’s a no-brainer, as you say. Of course, you should do the following. In medicine particularly, this is sometimes referred to as the parachute test: it would be unethical to run a controlled trial because it’s so obvious that this thing is more effective. Imagine we jump out of an airplane and say, “We’re going to see whether parachutes are effective: you and the even numbers will jump with a parachute, and you and the odd numbers have no parachute, and we’ll find out what happens.”

David: This is generally considered unethical, and the argument is often used as a defence in a number of areas, including medicine, for why certain trials shouldn’t be done. What is worth bearing in mind is that there are now some very famous examples where people made that argument, and when someone finally got around to doing a controlled trial, it turned out that the assumption was wrong. A very famous example is giving a steroid injection after a head injury (not particularly behavioral, admittedly), where it was considered unethical to do a trial because steroids would reduce swelling and so on. It was self-evident that it was right.

David: Two RCTs done in recent years both show that giving steroids to people after head injuries makes them more likely to die. So we do have to be serious about humility. Sometimes before-and-after measures just seem so overwhelmingly obvious, but wherever we can, we should check. And we should also check for another reason: do we understand what the active ingredient is? Quite often we take a kitchen-sink approach, where we try several things at once. Hopefully it works, but we’re then not sure which was the active ingredient. I’m okay with that; it’s a good problem to have. This thing seems to be working really well; now let’s figure out which of these elements is the most important, and can we optimize it? I hope that’s a reasonable answer. Does that fit with your example?

Jakob: Absolutely, David, thanks for that. If I hear it correctly (and I love the humility point), we really need to keep in mind what the potential costs are, and what the potential health implications are, if health is involved. That is an area where it is absolutely unimaginable to go ahead without rigorous testing. Perhaps areas like the electricity example I gave are ones where it is somewhat more morally defensible to take what we call the “just do it” approach.

Jakob: I put that out as a question, but it's a perfect segue into the next one, which is about the ethics of behavioral science and nudges. As a nonprofit, we at The Decision Lab are particularly interested in the ethical dimensions of nudging. One compelling argument we've heard for why nudging is, or can be, ethical is that choice architecture happens all the time whether we think about it or not; therefore, it's an ethical imperative to think more deeply and deliberately about how we're doing it. That's a very interesting view, but it brings up further ethical questions. If nudging gives you a tool to be more deliberate and empirical in the way you affect people's decisions, how can we make sure we do this in a way that is aligned with people's interests as much as possible? Is the answer to create discourse and let people decide where to be nudged, or should we decide for them based on societal ideals such as the ones you just mentioned, being healthy or being prepared for retirement? What's your take on that, David?

David: I think it's incredibly interesting. Many of us have wrestled with this. Cass Sunstein, of course, has thought about it a lot, and we do too. There are lots of levels to it. There are questions about methods, literally experimentation and testing, which we touched on a bit, and then there are also questions about behavioral science and nudging themselves. Let's at least acknowledge the questions around experimentation, where some people hold to ethical principles such as that everyone should be treated the same. I personally take a different view, which is that it's often unethical not to have tested: otherwise, how on earth do you know if you're doing harm or good? That's a battle that continues to rage, and many people don't take it as self-evident that experimenting is the right thing to do.

David: You're also asking about behavioral science more broadly, especially around choice architecture. It's still a pretty good argument to run: whether you like it or not, behavioral science is precipitating out, crystallizing out, choices. Take the canteen example: we never quite thought about it before, but if it's true that most of your plate is filled with the first three things you see in the canteen, then it really matters what you've put first. You might not have thought about it before, but now it suddenly is a real choice, and you have to take a view on it. And it's not trivial which dimension you optimize on. Broadly, the North American response, the Cass Sunstein, Richard Thaler type answer, tends to be: why don't we try to set the default in a way that most people would think is sensible, and that maybe you yourself would think sensible, while still leaving the choice in play.

David: That works to a point, but it's by no means the end of the matter, I think. At the very least, we know that people make different choices in the moment than the choices they make on reflection. In the famous Danish worker study, people won a competition and were asked in advance: do you want chocolate or a bowl of fruit? In advance, most people choose fruit. Then, when the experimenter turns up and says, "I'm really sorry, this is terribly embarrassing. We've lost your form. What was it you wanted?", the vast majority flip from saying fruit to saying it was actually chocolate. In the moment, they choose chocolate.

David: Which choice was right and which was wrong, even within ourselves, is not a trivial matter. And I think there's another level beyond it, beyond the choices you make for yourself on reflection, or in the form of commitment devices. Collectively, what decisions do we make as a society or a group, when we talk to other people, about the right way to set the defaults and the norms we use to encourage behavior? For that, my view is there's no escaping it: we should use proper democratic mechanisms. In fact, in many areas we should go further than current democratic mechanisms and ask the public: this is an interesting dilemma, this is how we're shaping behavior, how do you think we should set this? Should we remove chocolate from beside the checkouts in supermarkets?

David: We could just take a view on that, but maybe we should get groups of customers together and ask them: having been shown the evidence on how it changes your behavior, what do you think? They might well come back and say, "Well, get rid of all of it," or they might say, "Couldn't you mark the aisles and have guilt-free aisles, and then have some where the chocolate is, so when you're with your kids you can choose one or the other?" My point is, I don't think you can settle these questions with an elegant internal ethical argument alone. You need to engage the public directly and have them be involved in shaping this. As it's sometimes put, it really matters who is nudging the nudgers, and we need strong mechanisms both in government and also in relation to private sector players.

David: If you're a big social media giant and you're making quite important decisions about choice architecture that affect teenagers and lots of other people, how does the community get to express its view about setting the parameters: how long people are online, or how the toggles are set? These are important questions, not just for governments but for companies too.

Jakob: Right, nudging the nudgers, I love that phrase. I'd like to ask you a quick follow-up on this, on the idea of testing or asking the people: asking them for their decision on, or their views on, whether something is ethical or not. One thing that came to mind while you were speaking is that nowadays you have these easily accessible crowdsourcing platforms like Amazon's Mechanical Turk (MTurk) and others, where you can quickly test ideas across pretty large samples. I wonder about you guys at BIT, because one thing I've heard from a friend, who doesn't know a whole lot about behavioral science, is that when he heard about the whole concept of nudging, he said, "So do you guys then decide what the right behaviors are?" We had that discussion, and it's not us.

Jakob: I guess what I wanted to ask you is, first of all, have you guys already started using these kinds of methods, when you have an intervention idea, to ask the general population for something like an average consensus? And if you do, what are the potential payoffs and trade-offs? One could probably argue that the sample didn't have access to the research, which may actually show that the general public's opinion is not the right one.

David: Yes, indeed. By the way, there's a specific thing you mentioned there which is to do with online testing. We ourselves have built a platform called Predictiv, which we use to test certain things like terms and conditions, so people understand them; these platforms are very good for things like comprehension. But you were asking something slightly different, which is not just what's the effect of this intervention or prime, but whether you then ask people: to what extent do you think it's right that it had this impact on your behavior? I go back to the point that we can ask people to be reflective about it, but there are so many choices in the world. Do we really all want to spend all our time deciding all of these things, setting toggles, setting switches, whether individually or collectively? Cass has in fact made the argument, in the US context, that sometimes an important principle is that people might decide not to decide; they've just got better things to do in their lives, which will be true in many domains.

David: Our house view has long been, where possible, and frankly we would love to do it more extensively, that you actually bring people together and put it to them. We've run a couple of deliberative forums, such as in Australia, where we take a sample of the public. We did one on obesity, for example, where people were brought together and shown lots and lots of the evidence. They could then themselves choose which experts they wanted to hear from, and they also heard from industry and academic players. They were then asked to give a view: having heard the available science, and also the industry view, what do they think would be the right policy to move forward?

David: So they really do get to shape it. Or take a more recent version we did, which I thought was totally fascinating, in the world of social media, where there is massive testing being done by many commercial players who are chasing retention, feeds, advertising and so on. I'm not saying whether that's right or wrong, but it does establish certain kinds of facts. We put together a group of teenagers, and the work combined exposing them to various kinds of facts, learning from them about what they were doing in their own behavior, and then involving them in essentially a design sprint to help co-produce or co-design what the alternatives could look like.

David: That can vary from what they themselves could do, the decisions they would make: if you're spending the first hour of every day on Snapchat completing your streaks, how do you feel about that? Can you take back control? If you could redesign the products, what would they look like? What kinds of levers could be built into them that you, as an individual user, could take control of? And some of these, as I said, are not just individual choices, they're collective choices. How would you wish to shape the character of social interaction on social media? That's something we have to decide together, not just as individuals.

David: So yes, my own view, and I've held it for a long time, is that I'm quite a big believer in forms of enhanced democratic deliberation that enable, frankly, not everybody, because we've got better things to do in our lives, but samples of the public to walk that journey with us, hear the different views, and then essentially make the decisions. Let them give us permission, if they think it right, to make the change. Put the power back into the hands of individual consumers, families, and citizens to shape the world which in turn shapes them.

Jakob: Got it, and that makes perfect sense. To shift gears, David: I think a lot of our listeners are fascinated by this field; they've read books like Nudge, and maybe your book Inside the Nudge Unit, and others, and they're now interested in how to have a career in behavioral science. I'd like to speak to you a bit about what that takes. Behavioral science is an appealing career choice for many, especially those who want to sit at the intersection of various fields, as well as between theory and application, but for that very reason it's also a somewhat tough field to prepare well for. Many of our listeners have asked us how they can best prepare. With that in mind, what skills do you think applied behavioral scientists will most likely need over the next 10 years, and how can they best prepare?

David: Yeah, it's a great question. It is a very, very exciting, vibrant field, and to some extent, of course, it's an evolution of existing fields. I should explain for our listeners that I myself am a recovering academic; I sometimes say that, having been tenured at Cambridge, I'm an escapee. One of the frustrations with academia is that it tends to put things into narrow disciplines. That's one of the reasons, as we touched on earlier, behind the naming of the Behavioral Insights Team: were we going to call it behavioral economics, and so on? Danny Kahneman said, well, you can't call it behavioral economics; why should the economists get all the credit? What about the psychologists and anthropologists? A whole range of disciplines have things to add, and one of the things I hope we'll see, and we already start to see, is that for today's students and graduates there are offerings which blend disciplines. You can do psychology and economics with anthropology. Why not have a bit of design in there as well, or how to be an entrepreneur?

David: There are more blended approaches coming forward, and there are at least some institutions, and some master's programs in the world, offering that combination. But the truth is, quite often people have to do it for themselves, and for us, in many units, we ended up creating blended teams by choosing different kinds of people and bringing them together. To the foundations of your question: what do you really need? You need passion and curiosity about human behavior. It's not just very practical stuff; it's also incredibly interesting to understand who we are, what the nature of human beings is, and how our own behavioral decision-making works. Not just psychology narrowly, but more generally: what makes people tick, what is really driving their behavior? So there is a core of knowledge about what empirically drives behavior.

David: For us, this is very much rooted in an empirical tradition. I guess you might try to do it a different way, but for us that means understanding methods: understanding what a randomized controlled trial is, how a stepped-wedge design might be done, or propensity score matching. These methods, particularly quantitative but also qualitative analysis, and being able to analyze data: data and those tools are the key to the universe, really. They can help us know what's going on. Don't shy away from those courses and options in college; get into them, it's incredibly rich.
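For readers who haven't met the methods David names, here is a toy illustration of why something like propensity score matching matters when you can't randomize. All numbers are invented, and the matching is done crudely on a single observed covariate, standing in for a fitted propensity score: motivated people both join a savings program more often and save more anyway, so the naive treated-versus-control comparison overstates the program's effect, while matching like-with-like recovers something close to the true effect built into the simulation.

```python
import bisect
import random
import statistics

random.seed(0)

# Hypothetical observational data: "motivation" drives both program
# take-up (treatment) and savings, confounding the naive comparison.
people = []
for _ in range(5000):
    motivation = random.random()                       # observed covariate
    is_treated = random.random() < 0.2 + 0.6 * motivation
    saving = 100 * motivation + (20 if is_treated else 0) + random.gauss(0, 5)
    people.append((motivation, is_treated, saving))

treated = [p for p in people if p[1]]
control = [p for p in people if not p[1]]

# Naive comparison: biased upward, because treated people are more motivated.
naive = statistics.mean(p[2] for p in treated) - statistics.mean(p[2] for p in control)

# Crude matching: pair each treated person with the control whose
# covariate value is closest (a stand-in for a propensity score).
control.sort(key=lambda p: p[0])
keys = [p[0] for p in control]

def nearest_control(x: float):
    i = bisect.bisect_left(keys, x)
    candidates = control[max(0, i - 1): i + 1]
    return min(candidates, key=lambda p: abs(p[0] - x))

matched_diffs = [t[2] - nearest_control(t[0])[2] for t in treated]
print(f"naive effect:   {naive:.1f}")                         # well above the true 20
print(f"matched effect: {statistics.mean(matched_diffs):.1f}")  # close to the true 20
```

The gap between the two estimates is exactly the kind of confounding that an RCT removes by design, and that matching methods try to remove after the fact.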

David: Of course, the behavioral science which we use, and probably you use, is not just an academic matter; it's also about applying it to real-world issues. That's not everyone's cup of tea. Some people are very interested in just looking at the science in the abstract. In that sense, we're engineers, not pure natural scientists: we're trying to use these tools to really address problems in the world and have some impact. That's very exciting, I think, but it's not for everyone. So yes, at least that trilogy: the core, essentially empirical, behavioral science content we've accumulated so far; the methods, so build up those methods; and also the interest and curiosity, exposing yourself to real-world challenges and real-world administrative systems. Are you interested in actually having an impact? That isn't for everyone, but for me it's an incredibly exciting combination.

Jakob: Right, thank you David. If I could just ask: is it fair to say that, while assumptions and views may differ depending on the academic field you stem from, there still needs to be agreement on the methods that will test those assumptions, in order to find somewhat of a consensus within behavioral science?

David: Yes. I mean, in a rough way there is some emerging consensus; we use lots of different frameworks, but one of the core insights, I think, is that if you understand and think about what's happening in a human brain, let alone in a society, there are just multiple influences, so many processes. In general, the scientific method is one based on humility, right? It's acknowledging the extent of your ignorance. Some of the best scientists will say that everything they ever did eventually got proved wrong, but that's okay; it's the boundaries of our ignorance gradually narrowing. It means we should be embracing empiricism.

David: Again, Philip Tetlock's work illustrates this very well. There are lots of people in the world, lots of pundits, who have very confident, nice, elegant theories and will give you a very strong view on something, and whose predictions often turn out to be poor, as opposed to people who acknowledge that the world, and certainly human beings, are more complicated, who have doubt, who embrace doubt: we think this effect exists, but we're not really sure. They are perpetually testing, learning, and trying to evolve their approach and their understanding.

David: That, it seems to me, should be true for any discipline, but for behavioral science it is especially true. To put it a different way: I started my life many years ago as a natural scientist, and a lot of physics feels like you start with the world looking incredibly complicated, and then you're able to elucidate underlying principles or laws which are simpler and cleaner in some ways, be it the fundamental forces or quantum mechanics. Psychology is almost the other way around. We start off thinking, oh, human behavior is not that complicated, but the more you dig into it, even in simple perception tasks, you realize there are multiple processes running in parallel, and in some ways it gets more and more complicated the deeper you dig. You'd better get used to, or expect, a lifetime of learning. You're not going to just get there and say, "Oh, well, that's sorted, I understand it all."

Jakob: Great, that makes a lot of sense. As we come slowly towards the end of our podcast today, I'd like to ask you, David, about your take on behavioral science as it relates to the private sector. The reason I'm asking is that, as a lot of our listeners know, the applied behavioral science and nudging world really started in the public sector. Your cabinet team was the first mover, but then nudge units at multilaterals such as the World Bank and the United Nations, and at other governments, quickly followed. Today, I'd say a lot of governments employ at least one or two behavioral scientists, so nudging, or applied behavioral economics, seems well suited to shaping public policy.

Jakob: However, after observing the success of nudge units across governments, an increasing number of private sector companies have also followed suit with their own nudge units. I've shared with you the example of The Decision Lab working with some private sector players on building out their own nudge units, and left and right people are saying this bank, this insurance company, they're all interested in building their own nudge units now. David, what's your take on the private sector's increased appetite for doing applied behavioral science work for their businesses, especially given the ethical conversations we had earlier? And where do you see behavioral science evolving, especially in the private sector, in the years to come?

David: It's an incredibly interesting question. Frankly, whether you're in the private or public sector, you're going to bump into this. Some people in the private sector would say you can see examples of what we now call nudging and behavioral science going back to the 1930s. Many industries, even if it wasn't as overt, have used behavioral science for a long time, and not always for good, you might argue. The gambling industry is very sophisticated in its use of behavioral science and has been for some time; it's been ahead of the game, for sure. Tversky and Kahneman's original work was about how people make risk judgments, so it's no surprise the gambling industry has an interest in making you overestimate your chance of winning, feel really good when you get near misses, and not feel too bad when you lose, and so on.

David: One of the issues this definitely hits is ethics. If ethical questions apply to where governments should or shouldn't be nudging, they absolutely apply on the private sector side too. Often through market evolution, if not overtly, industries have developed quite sophisticated approaches in one way or another. Cashbacks are a simple example: you can offer a big cashback on your new printer, or whatever it might be, because most people will overestimate the chance that they'll get around to claiming it. Business models can be built on that basis, as a way of selling a product. There are lots of these practices which essentially use behavioral science.

David: I think one reason why governments, whether they want to or not, get pulled into regulating this area is to try to police and work out the parameters of acceptable nudging in the private sector. In essence, like any form of knowledge or technology, behavioral science can be used for good or for bad: in the same way that biochemical technologies can be used to make amazing new precision medicines, they can also be used to make nerve agents to kill people. You can use behavioral science to really help people save more and live healthier lives, but you can also use it to scam people, take advantage of them, and sell financial products they don't really need. Cialdini's work, of course, was a beautiful early illustration of those sorts of abusive practices as much as of good ones. We need to be aware of that possibility, and part of the role of government and regulators is to set the parameters to make sure this stuff isn't abused. Citizens themselves, of course, can also learn to recognize abusive practices and adapt.

David: BIT is that way by design: you may know we're actually a social purpose company ourselves these days, and under our own rules we don't allow ourselves to take on work unless there's a social impact to it. Well-functioning markets are also part of the story. Indeed, the way I'd conclude on that is that some of the most interesting problems and puzzles in behavioral science lie at the interface between public and private sector operations. Designing markets so that good companies and good products really come to the fore and grow: what does that look like? How do you design two-way platforms with the right characteristics to support innovation and growth? Such platforms can be built by governments or, indeed, sometimes by private sector players themselves.

David: So these are incredibly interesting and important sets of questions. It's funny: people are often more familiar with asking about the ethics of nudging by governments, but whether we like it or not, citizens are being nudged day in, day out by all kinds of players, on the commercial side as well as the public sector side, and indeed, frankly, by our friends and relatives and lots of other people too. It's part of life, and my hope is that many of us will try to use this technology, this knowledge, for good in the world rather than for bad.

Jakob: Fantastic. A quick follow-up: one thought we've been thinking through is whether a time is coming for some type of global ethics body on nudging. I'd be very curious to hear your thoughts on this. I think we can agree that whether it's finance, law, or any of these larger fields, there are governing bodies and ethics institutes that try to influence the ethical aspects of those fields, in a process that keeps being refined. Nudging, or behavioral science as it is now being applied, is in that sense a relatively new field, I think that's fair to say. What's your take on the idea that people might feel more relaxed about the ethical questions and implications of applied behavioral science if they knew there was a neutral, third-party ethical governing body overseeing it?

David: Well, in part you're asking an empirical question, and I'm not sure, on that specific claim, whether it's true. If you think of George Loewenstein's work on privacy and so on, sometimes when you tell people, "here's something you should worry about, but don't worry, there's a body that's going to sort it out," the reassurance itself makes them stop worrying. And that isn't in itself a reason to think it's the right thing to do.

David: Still, I definitely think there's something here, though I have a question mark over whether it's unique to nudging and behavioral science. Very similar questions apply, say, around machine learning and the use of algorithms. Is that in the behavioral science and nudging bucket? It overlaps with it, but it's definitely not the same. Or take experimentation in relation to pharmaceutical practice: that's an area where regulatory practice has evolved over a long period of time.

David: Experimentation in general, let alone behavioral science, has raised these questions. Should there be oversight? Yes, I think so. Certainly within disciplines, and you would hope within industries; sometimes governments themselves will set those parameters. In the end, the question is who sets the rules, and I go back to my phrase earlier: who nudges the nudgers? It is important in any domain that you have divisions of power and responsibility, and that no one is completely sovereign; everyone should answer to someone else. My own view is that governments, and indeed industries, will have to strike governance arrangements around these kinds of practices, arrangements in which their customers or citizens are more able to shape and determine them.

David: That's not a view everyone agrees with. In Britain at the moment we're having some interesting constitutional issues raised by the contrast between a referendum and a parliament with a potentially different view. My view is that in many places we should have something like a third chamber: a sample of the public who come together, particularly around lifestyle choices and decisions, to help shape and choose what those should look like. I guess what I'm saying is that one of the mechanisms you start to see evolve is that, ideally, people publish their practices, and sometimes you have ethics boards, like you'd see in the academic world, to say, "Is it okay to do this or not?" And remember, we do sometimes turn to our democratic structures to do some of that, but there's a deeper set of questions too. It would be great to see more of that, and it's not limited to nudging; there are lots of other things we do too.

Jakob: That makes sense, David. We're coming towards the end of the chat, and we'd like to ask what short- or long-term future you envision. I think a lot of listeners are curious: what do you envision for the Behavioral Insights Team, and what types of projects is your team most excited about in the coming years?

David: Yeah, that's fair enough. I think the things that are most exciting, probably for the whole BIT team, are these. One of our roles is that we often help other governments and other bodies, sometimes private sector ones nowadays, to build these capacities. Part of our role is also to keep pushing the frontier. And there's a lot to be said for simply scaling these approaches. I serve ministers in Britain as what's called the What Works National Adviser, which is about taking that empiricism and trying to figure out what's the best way of teaching kids, what's the best way of reducing crime. That's not just a behavioral matter, and I'm a great champion of it. Other countries like Canada and Australia are also coming on board.

David: I think that's very exciting. I sometimes think history will say the most important thing BIT did for the world is that it helped to push that empiricism, quite apart from behavioral science. Within behavioral science itself, well, the world has no shortage of great challenges. For ourselves, along with many colleagues across the world, we're starting to move. Yes, let's carry on with the nuts-and-bolts issues, like getting people to pay their tax on time, save more, or turn down their thermostats, but we're also moving onto some of the really big, wicked issues: obesity, improving the functioning of the economy. One of the ironies about behavioral economics specifically is that it has been applied least to economics itself, where there are lots of behaviorally based market failures; I think it has quite profound implications for economic policy that haven't been thought through, and we should think about them.

David: There are a lot of issues to do with the flesh and blood of humanity that get expressed as wars, with something like two wars starting every year. We're really interested in what you can do to use behavioral science to stop conflicts and to reduce the reignition of previous conflicts, which is very exciting. Social mobility is another great area, one of those great, deep issues. What can we do about the multiple, subtle forces that are holding back some of our kids, be it in their early years, the 30-million-word-gap problem, or how you can get parents to engage more early on and mesh with their kids, through to the essential life skills we should be nurturing in young people, right through to social capital and network effects, or saving? How can we use fintech to support rainy-day saving, and will the consequences be as great as we think they might be?

David: It's an exciting, vibrant time to be engaged in this, especially for people very early in their careers thinking about what to do. I do think our kids will look back and say, what the hell did you used to do, guys? You used to try to figure out the world using these very, very strange models of human behavior, which were very unrealistic, and you never tested things either. No wonder the world was such a mess. Some of the most exciting stuff going on right now, and going forward, is taking on these very big human challenges, measuring ourselves against them, and seeing if we can do something about them. Why not? We might as well give it a try.

Jakob: Got it. So the good news, if I heard you correctly, is that we're not going to run out of work anytime soon in this field. It looks like there are plenty of areas where behavioral science can add value.

David: Yeah, indeed. Maybe just a concluding comment: one of the telltale signs that we've been successful will be that we don't really separate it out as behavioral science anymore. You know what I mean? Empiricism won't be "oh, we do experiments"; that's just how you should do public administration, how you should run things. Companies will just have this built into them, governments will have it built into them, in the same way we don't really talk about digital or technology as something separate. We'll get to that point, but we're not there yet, that's for sure.

Jakob: Got it. Well, fantastic, David. I want to thank you for all the insights you've shared with us today. Is there anything else you'd like to let the audience know before we wrap up?

David: No, I think it's fantastic that there are people interested in this area. We know enough to know it's a very, very powerful tool. It's nearly a decade that the Behavioral Insights Team has been running in the UK, and it's now helping other countries too. So bring it on, come one, come all. And let's not lose our fascination with it: it's also about who we are as human beings and the nature of our society. It's an incredibly interesting intellectual journey that we're all on together, to understand who we are and what makes us tick. Not only is it practical, it's also interesting, and we should try to hold on to that.

Jakob: Great. Humility, nudging the nudgers, and not losing our fascination with it: those are the three big takeaways I’m taking away. I want to thank you so much, David, for your time today, and I wish you and the whole BIT all the best for 2019 and beyond. Thank you.

David: Thanks so much. Yeah.

Jakob: Bye.

The Halo Effect in Consumer Perception: Why Small Details Can Make a Big Difference

Many of us have experienced a situation where people discounted something worthwhile that we’ve worked on, simply because one part of it was flawed, even if we didn’t consider that part particularly important.

For example, imagine the following scenario:

You and your team have been developing a product, and after months of hard work it’s finally ready. Excitedly, you send it out, and then sit back and wait to receive what you’re sure will be glowing user reviews.

But soon the reviews start coming in — and you’re disappointed to see that they’re nothing like what you’d hoped.

“I didn’t like the aesthetic” seems to be the common theme, though it comes in many forms, from “the design looks bad” to “the color scheme is ugly”.

You understand this to some degree: your focus was on functionality, not looks. Yet, even when users offer feedback on other aspects of the product, it is all much more negative than you expected. It’s as if their entire perception of the product has been influenced by their initial dislike of its appearance.

This scenario can take many similar forms. For example, maybe instead of discovering this issue during testing, you encounter it during the launch of your product. Or maybe instead of a product, it’s a presentation you’re giving to interested buyers. It doesn’t matter much, since regardless of the exact scenario, the problem is the same: a single attribute, such as the aesthetics of something that you’ve created, can substantially affect people’s overall perception of it, even when it comes to other attributes that have nothing to do with it.

The culprit in such situations is a cognitive bias known as the ‘halo effect’, which can cause people’s opinion of something in one domain to influence their opinion of it in other domains [1][2]. A commonly used example of the halo effect is the fact that when we meet other people, we often let one of their traits influence our opinion of their other traits. For example, research shows that physical attractiveness plays a significant role in how people perceive others, even when it comes to judging traits that have nothing to do with looks. This means, for instance, that people rate attractive people as having a better personality and as being more knowledgeable than unattractive people [3][4].

However, as we saw above, the halo effect plays a crucial role not only in how we perceive people, but also in how we perceive other things, such as products. This is important for businesses and organizations to understand, since it means that consumers’ perceptions can often be meaningfully affected by the halo effect. This usually occurs at two main levels, the product level and the brand level, and in the sections below we will review examples of both, together with their implications.

The halo effect at the product level

In this context, the halo effect means any attribute of a product can affect how people perceive its other attributes, as well as how they perceive the product as a whole [5][6]. For example, an unappealing visual design can cause people to perceive the reliability of the product in a negative manner, even if there’s no direct connection between these attributes. Similarly, an appealing visual design can cause people to view a product in a more positive light, even when it comes to attributes that aren’t related to its design.

To illustrate how the halo effect can influence consumer perception at the product level, I worked with the researchers at The Decision Lab to run an experiment on the topic. In this experiment, people were shown one of the following two versions of what they were told was the login page for an app:

The participants in the experiment were then asked to rate several of the app’s expected attributes, as well as its aesthetics. The main findings of this test are summarized in the following infographic:

For more information about this experiment, both in terms of its results and in terms of methodology, see this complementary case study.

In short, the results of the experiment suggest the halo effect played a noteworthy role in how people rated their expectations of an app after looking at its login page. Specifically, when people liked the aesthetics of the login page, they tended to rate the app as being substantially more likely to be intuitive, reliable, and secure. That is, people extrapolated from the design to form expectations of the product as a whole — despite not having any direct information regarding these attributes, and despite having had no more than a brief look at the login page.
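The kind of between-group comparison behind results like these can be sketched in a few lines of code. The sketch below is purely illustrative: the ratings, sample sizes, and attribute names are made-up assumptions, not the actual study data. It compares mean 1–7 ratings between raters who saw an appealing versus a plain login page, using Welch’s t statistic; a halo effect would show up as a consistent positive gap across all attributes, not just aesthetics.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical 1-7 ratings (NOT the actual study data) for each app
# attribute, split by which login-page version the rater saw.
ratings = {
    "intuitive": {"appealing": [6, 5, 6, 7, 5, 6], "plain": [4, 3, 4, 5, 3, 4]},
    "reliable":  {"appealing": [5, 6, 6, 5, 6, 5], "plain": [4, 4, 3, 4, 5, 3]},
    "secure":    {"appealing": [6, 6, 5, 6, 5, 6], "plain": [4, 3, 4, 4, 3, 4]},
}

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

for attribute, groups in ratings.items():
    a, b = groups["appealing"], groups["plain"]
    print(f"{attribute}: appealing={mean(a):.2f} "
          f"plain={mean(b):.2f} t={welch_t(a, b):.2f}")
```

In practice, a real analysis would also report p-values and effect sizes (for example via `scipy.stats.ttest_ind` with `equal_var=False`), but the core logic, comparing mean ratings of non-aesthetic attributes across the two design conditions, is the same.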

These findings have two important implications: first, that a single attribute of a product, such as its aesthetics, can greatly affect people’s overall perception thereof; second, that people can form impressions based on very little information. In this case, a single look at an image of a login page was enough to substantially influence people’s perception of its aesthetics, and their expectations regarding its other attributes. Of course, as more information about the product becomes available, people’s overall judgments may change — but the importance of first impressions (and salient features) should not be underestimated. 

Overall, the key lesson here is that consumers almost never fully disentangle the different attributes of a product from one another, meaning it can be costly to cut corners, even on features seen as unimportant. Moreover, first impressions count, whether it comes to the packaging for a gadget, a landing page for a website, or anything else associated with your product. Finally, we should note that the same forces at play in our example can operate just as well in reverse, meaning the halo effect can positively affect consumers’ perceptions of a product. When developing a product, consider how this knowledge might be leveraged; a simple improvement to the color palette, or the spacing of text, can meaningfully affect how your product is perceived, and in turn your bottom line. Pick the low-hanging fruit.

The halo effect at the brand level

It will be no surprise by this point that the halo effect is also present in our perceptions of brands [7][8]. For example, well-executed corporate social responsibility programs have been shown to lend a positive halo effect, which can reduce people’s propensity to take action against a company in light of negative news about it [9]. Furthermore, social responsibility programs in a domain that is visible to consumers, such as recycling, can cause consumers to form a positive perception of other aspects of the company, about which they have little or no direct information, such as its production process [10].

Of course, as we saw above, this effect can also be negative. This means, for example, that poor communication with your community or poor customer support can cause consumers to form a negative perception of the reliability of your products, even if the two things aren’t necessarily related to one another, since consumers will not always disentangle their perceptions of your brand. Accordingly, anything from how one of your employees behaves on camera to how your support department deals with a request for a refund may shape, for instance, the perceived functionality of your products. As such, when it comes to ensuring that you have a strong brand, give proper consideration to every aspect of your operations — especially those that you might otherwise neglect.

The halo effect in a broader context

In this article, we focused on how the halo effect plays a role when it comes to products and brands. However, the halo effect can also play a role when it comes to other entities that have to do with consumer perception, such as specific store locations or public figures within organizations. 

It’s also important to note that, as with all elements of behavior, there will always be some variability in the manner and degree to which individuals are affected by the halo effect [11]. For example, research has shown that in some situations, while aesthetics initially influence perceived usability, over time this relationship can flip, so that usability comes to affect whether people like the aesthetics [12]. As such, while it’s important to account for the halo effect, and to remember that it can play a role in any situation involving human assessment, it’s also important to remember that our ability to understand and predict its influence is imperfect, and should be treated as such.


Overall, the key points for you to take from this article are the following:

  • The halo effect is a cognitive bias that causes people’s opinion of something in one domain to influence their opinion of it in other domains.
  • The halo effect can apply when it comes to the perception of both positive and negative factors.
  • The halo effect can play an important role at the product level, where a certain attribute of a product, such as its aesthetics, can influence how people perceive its other attributes, such as its reliability — even if those attributes are unrelated.
  • Similarly, the halo effect can play an important role at the brand level, where people’s perception of one aspect of an organization, such as its customer service, can influence how people perceive the rest of its operations and the company as a whole.
  • There is variability involved in the halo effect, so it may be difficult to predict the exact degree or manner in which it will affect people in any given situation.

Itamar Shatz is a PhD candidate at Cambridge University. He writes about psychology and philosophy that have practical applications at