Mind Your Heart: Irrationally Giving

You see a drowning child in a shallow pond while walking past it. Instinctively, you wade in and pull the child out. Your clothes are ruined by the rescue. This cost may be insignificant to you, yet life-saving for the child.

Peter Singer, a moral philosopher, uses this thought experiment to make a cogent argument: we ought to prevent bad things from happening if we have the power to do so without sacrificing something of comparable moral importance. Since donations to credible charities demonstrably help prevent avoidable death and suffering, it follows that we acquire a moral obligation to contribute, because the philanthropic cost is insignificant relative to our standard of living. Although this argument seems shrewd and sober, it does not reflect the way donors actually make decisions about their charitable giving. Several neuroeconomic findings reveal the importance of emotional underpinnings in charitable giving.

Emotions in Moral Decision-Making

In one study, Greene and colleagues (2001) asked participants to judge the appropriateness of actions in moral dilemmas. The first dilemma described a scenario wherein a trolley is running down a track on which it will kill five people. Participants were given two hypothetical choices: they could either allow the trolley to proceed by not intervening, or they could push a lever to re-route the trolley, saving those five people but killing a single person on the other track. In most cases, participants thought it was appropriate to push the lever to save five people at the expense of one. Interestingly, in a slightly varied dilemma, participants were asked to decide whether it was appropriate to push one person onto the track to stop the trolley and save five people. Arithmetically, the same number of people are killed and saved across both dilemmas, yet participants were significantly more reluctant to say that pushing a person onto the track was morally appropriate. The researchers argued that the second dilemma engaged emotional systems that were not active in the first ‘lever’ scenario, and Greene et al. (2007) argued that these intense emotional differences altered judgements. This clearly exemplifies our susceptibility to irrational behavior in moral decision-making: the context of the choice has a disproportionately strong impact on how we craft our personal judgements. In a critical life-and-death situation, the framing of pushing a person instead of a lever has immense ramifications for actual behavior.

Recent advances in neuroeconomics research have led to a more formal examination of this effect. Neural evidence shows that brain regions associated with emotional processing, including the posterior cingulate cortex, medial frontal lobes, and posterior parietal lobes, were more active during personal moral dilemmas involving direct personal harm (Koenigs et al., 2007). Case studies of those suffering from brain injuries suggest that emotional perception is a key determinant of the social decisions involved in charitable giving. Specifically, Koenigs et al. (2007) found that lesions to the orbitofrontal cortex (a region associated with emotional processing) were associated with more ‘rational’, cold, and utilitarian judgements when deciding tradeoffs between one individual’s life and another’s. Emotionality is clearly a key component of moral decision-making. As such, the implications of emotional nudges are discussed further below.

Evoking Emotions In Charitable Giving

Practitioners aiming to increase charitable donations should leverage the identifiability effect. Research shows that identifiable aid recipients elicit more empathy than otherwise unidentified individuals in need. One study used the ‘dictator game’ to examine this effect. In this behavioral game, a participant has the choice to give a portion of their money to others or keep the entire amount for themselves. Results suggested that giving was higher when the recipient was identified by a last name. Small and colleagues (2007) have argued that donors become entranced by specific identifiable victims. This phenomenon played a critical role in 1987, when “Baby Jessica” received over $700,000 in donations from the public after she fell into a well near her Texas home. Similarly, £275,000 was urgently raised in 2003 when Ali Abbas, an Iraqi boy, was severely wounded. Interestingly, the same is true for animals: $48,000 was contributed to save a dog stranded on a ship adrift in the Pacific Ocean. These anecdotes show that the identifiability effect harnesses donors’ bias toward highly moving stories, which are easier to process and relate to than otherwise abstract statistics.

This effect was studied by Kogut and Ritov (2005) in an elegantly designed experiment.


Participants read a story describing either a sick child, or a group of sick children, whose lives were under threat. The survey described a new drug that could cure the disease. Participants were told that this drug was expensive, and that unless 1,500,000 shekels (about US$300,000) was raised soon, the children’s lives would not be saved. Participants were asked about their willingness to contribute under distinct conditions (i.e., presentation by name, age, picture, or all three). Results suggested that the identifiability effect was largely restricted to single victims. Willingness to contribute also increased with vividness: identifiable single aid recipients introduced by their name, age, and picture received significantly higher mean scores than those in less vivid conditions (name or age only).

In conclusion, the identifiability effect is double-edged. It depresses donations to unidentifiable aid recipients, but it can also be strategically leveraged to design more effective campaigns and solicitations featuring vividly identifiable aid recipients. As Baron and Szymanska (2010) put it, “Victims all have names. The fact that we are aware of one of them is an accident.” This accident, in the increasingly competitive space of charitable funds, is becoming an invaluable commodity.

Why You Might Not Be Sticking To Your Plans

We don’t stick to the plans we make

Plans are a sign of productivity. We plan our days at work, our weekends, and when to do a food shop. We also plan for our long-term goals.

But despite all of this planning, we often don’t complete our plans. We finish projects after our deadlines. We stop going to the gym after a few weeks. Our ‘healthy eating’ regime becomes difficult to keep up with.

Research has even shown that people usually have to make the same New Year’s resolution at least five times before they actually achieve what they set out to do in the first place.

There are several reasons why we don’t stick to our plans. Biases can make us underestimate what we need to do. They can also make us overestimate our abilities regarding how much we can achieve. And sometimes, we just make bad plans.

We don’t allow for enough time

Why do we underestimate how long it will take us to complete things? In his research, Roger Buehler and his colleagues highlighted a ‘Planning Fallacy’.

The Planning Fallacy describes the fact that people tend to underestimate how long it will take them to complete tasks. We do this because we rely too much on what we intend to do. We tend to ignore our own past experiences and so ignore how long it has taken us to do similar things in the past.

However, we don’t seem to do the same for others. When we estimate how long it will take someone else to do something, we take their past experience into account and are less biased by their personal intentions.

This bias toward our own intentions can lead us to make bad plans and not give ourselves enough time to complete tasks.

But, asking for someone else’s opinion can help you make a more accurate plan. They will be more accurate with timings and help you see clearly what you can realistically achieve.

We are too confident in our own abilities

Why does basing our plans on our intentions lead us to plan for things we might not be able to quite achieve?

This comes down to the ‘Dunning-Kruger’ effect. This is the idea that when we are bad at something, we believe we are better at it than we actually are.

We do this because we cannot recognize what being ‘bad’ or ‘good’ at the task actually looks like. The skills that we need to be good at the task are the same skills that we need to be able to understand why we are bad at it.

An example of this could be baking. If an amateur baker ends up making a bad cake, they might think that they followed their recipe really well, but that it was just a bad recipe. They do this because they lack the skills to tell them what part of the baking process they didn’t do properly.

It takes more than just believing good things will happen

We can make our plans better by getting our time frame right and being aware of our own abilities. But how do we make sure we then frame our plans in the right way?

Gabriele Oettingen has done a lot of research into the best type of plans. She found that ‘plans’ that are simply positive fantasies can actually be bad for trying to achieve goals. An example fantasy would be ‘I would love to get that job when I quit my job next week.’

She found that these fantasies aren’t enough, and that a key component for successful planning is thinking about the obstacles that might get in the way of achieving what you want. Identifying what stands between you and fulfilling your goal can make you more committed to carrying out your plan and allow you to deal with each obstacle when it pops up.

Oettingen coined the term ‘Mental Contrasting’ for this. You should contrast the fantasies you have with the realistic obstacles that stand in your way. Doing this brings you back down to earth and calls you to act so you can overcome specific hurdles. An example would be ‘I would love to get that job when I quit my job next week, but my CV is not up-to-date yet.’

Breaking down complex plans into simpler ones

Identifying the obstacles that can stand in our way goes a long way to helping us to stick to our plans. However, when these obstacles are complicated or tempting it can be hard to see a simple way to overcome them.

One way to make this easier could be to use Implementation Intentions, or ‘If-Then’ plans. These are plans where you state that ‘if’ something happens, ‘then’ you will carry out a certain behavior. For example: if I see a tempting unhealthy snack, then I will eat an apple instead.


By specifying the actions you will carry out in certain situations, Implementation Intentions take away the effort of trying to stick to your plan. The ‘if’ part sets up the situations or triggers that will help or hinder your goals, and the ‘then’ part specifically outlines what actions you will take to achieve the goal when they pop up. These If-Then plans make you implement what you intend to, and so can be particularly useful for complex plans.

Making better plans

We can underestimate our workload, overestimate our abilities and focus too much on what we want rather than what we need to overcome. All of these can contribute to why we might not stick to our plans.

To make your plans better, ask friends or colleagues for help planning, look at your past experiences, and identify the obstacles that get in the way of your final goal.

Society’s Biggest Problems Need More Than a Nudge

This article originally appeared in The Conversation (https://theconversation.com/societys-biggest-problems-need-more-than-a-nudge-58832) and belongs to its creators.

So-called “nudge units” are popping up in governments all around the world.

The best-known examples include the U.K.’s Behavioural Insights Team, created in 2010, and the White House-based Social and Behavioral Sciences Team, introduced by the Obama administration in 2014. Their mission is to leverage findings from behavioral science so that people’s decisions can be nudged in the direction of their best intentions without curtailing their ability to make choices that don’t align with their priorities.

Overall, these – and other – governments have made important strides when it comes to using behavioral science to nudge their constituents into better choices.

Yet, the same governments have done little to improve their own decision-making processes. Consider big missteps like the Flint water crisis. How could officials in Michigan decide to place an essential service – safe water – and almost 100,000 people at risk in order to save US$100 per day for three months? No defensible decision-making process should have allowed this call to be made.

When it comes to many of the big decisions faced by governments – and the private sector – behavioral science has more to offer than simple nudges.

Behavioral scientists who study decision-making processes could also help policy-makers understand why things went wrong in Flint, and how to get their arms around a wide array of society’s biggest problems – from energy transitions to how to best approach the refugee crisis in Syria.

When nudges are enough

The idea of nudging people in the direction of decisions that are in their own best interest has been around for a while. But it was popularized in 2008 with the publication of the bestseller “Nudge” by Richard Thaler of the University of Chicago and Cass Sunstein of Harvard.

A common nudge goes something like this: if we want to eat better but are having a hard time doing it, choice architects can reengineer the environment in which we make our food choices so that healthier options are intuitively easier to select, without making it unrealistically difficult to eat junk food if that’s what we’d rather do. So, for example, we can shelve healthy foods at eye level in supermarkets, with less-healthy options relegated to the shelves nearer to the floor.

Likewise, if we want to encourage more people to be organ donors, choice architects can design the form we fill out at the DMV so that the choice we make without thinking is the one that may allow us to save someone’s life in the future.

In my own research group, we lump these kinds of interventions under the umbrella of passive decision support because they don’t require a lot of effort on the part of a decision-maker. Indeed, these approaches are about exploiting – not correcting – the judgmental biases that people bring with them to all manner of decisions, large and small.

Since the publication of “Nudge,” there has been a proliferation of interest in bringing choice architecture into the policy mainstream. Even institutions like the World Bank and the Organisation for Economic Co-operation and Development are rolling out their own nudge units. And you shouldn’t be surprised to learn that the private sector has jumped on the increasingly crowded bandwagon of for-profit nudging.

We’ve successfully tested nudges for water conservation and sustainable food choice. Others have applied nudges to an even broader range of contexts. There’s no denying that choice architecture can work like gangbusters, which explains the widespread interest.

Sometimes a nudge isn’t enough

Nudges work for a wide array of choices, from ones we face every day to those that we face infrequently. Likewise, nudges are particularly well-suited to decisions that are complex with lots of different alternatives to choose from. And, they are advocated in situations where the outcomes of our decisions are delayed far enough into the future that they feel uncertain or abstract. This describes many of the big decisions policy-makers face, so it makes sense to think the solution must be more nudge units.

But herein lies the rub. For every context where a nudge seems like a realistic option, there’s at least another context where the application of passive decision support would either be impossible – or, worse, a mistake.

Take, for example, the question of energy transitions. These transitions are often characterized by the move from infrastructure based on fossil fuels to renewables to address all manner of risks, including those from climate change. These are decisions that society makes infrequently. They are complex. And, the outcomes – which are based on our ability to meet conflicting economic, social and environmental objectives – will be delayed.

But, absent regulation that would place severe restrictions on the kinds of options we could choose from – and which, incidentally, would violate the freedom-of-choice tenet of choice architecture – there’s no way to put renewable infrastructure options at proverbial eye level for state or federal decision-makers, or their stakeholders.

Simply put, a nudge for a decision like this would be impossible. In these cases, decisions have to be made the old-fashioned way: with a heavy lift instead of a nudge.

Often, decisions are more complex

Complex policy decisions like this require what we call active decision support.

In these cases, specialists trained in the science of decision-making must work with people both to help them to overcome predictable biases and to approach decisions in a way that is different from how they might otherwise make them instinctively. To inform and structure these kinds of decisions, we – like choice architects – also look to insights from the behavioral sciences.

For example, we have a rich understanding of the decision-making shortcuts that people apply, as well as of the predictable biases that accompany them. So, we know what to be on the lookout for when we help individuals and groups make better decisions.

When evaluating problems that unfold over long periods of time, we know that people tend not to look at cumulative effects, or consider how choices made today may restrict the choices that can be made in the future.

Likewise, we see that decision-makers struggle with questions about how to put boundaries around the problem before them. For example, who really counts as a legitimate stakeholder, and who doesn’t? Likewise, are there hard deadlines or financial ceilings that must be obeyed? Or are these really soft constraints that can be challenged if the right option can be identified?

We’ve also learned that decision-makers often fail to adequately account for the broad range of objectives that ought to guide their decisions, as well as the performance measures that let them know if they’ve achieved them. And, we know that the manner in which people search for alternatives is often incremental at best. People look to obvious and easy-to-find options, the tendency that nudges exploit, at the expense of the creativity that’s required to address the really complex challenges.


Perhaps worst of all, we observe that people avoid the necessary trade-offs when a choice can’t simultaneously achieve all of the objectives that they deem to be important. It’s often the case that the objectives that push emotional hot buttons, like fear, are the ones we pay the most attention to when trade-offs are difficult or uncomfortable, even if these objectives play a relatively small role in terms of advancing our overall well-being.

Active decision support helps decision-makers to overcome all of these obstacles, as well as others.

Unlike nudging, the intent of active decision support isn’t to direct people toward a specific course of action. It is to structure the decision-making process so that resulting choices are defensible – in other words, in line with our prioritized objectives. For big policies, this includes the deliberate balancing act between social, economic and environmental well-being.

The good news for policy-makers is that a wide range of tools and approaches are available which may help them make more defensible decisions.

Active decision support approaches work by breaking complex decisions into more cognitively manageable parts. And they are desperately needed. The wicked problems faced by society can’t be nudged away. Emergencies like the humanitarian crisis in Syria and the slow violence of climate change cry out for active decision support.

Yet, as governments amass nudge units, and as the private sector adopts a behavioral mindset in their marketing and public relations offices, the need for behavioral insights that support complex decisions goes unmet. Why? Perhaps because active decision support is often seen as something smart, educated people in the public and private sectors should be able to do intuitively, on their own. But, the simple truth is, they can’t. And, without investing in building the internal capacity for active decision support, they won’t.

Will A Tax On Disposable Bags Curb Their Use?

This article originally appeared in The Conversation (https://theconversation.com/paper-or-plastic-how-disposable-bag-bans-fees-and-taxes-affect-consumer-behavior-48858) and belongs to its creators.

Last month, England became the latest government – and last among members of the UK – to pass a policy to combat the recent rise in the use of disposable plastic shopping bags, in its case a five-pence charge for each one.

While English newspapers warned that the new policy would create chaos, England is by no means the first to consider such a controversial policy. Several countries across the world and local governments have taken steps to address the environmental consequences of increased plastic bag use through regulation.

These regulations contain subtle but important design differences across different regions. Some banned their use; others just taxed them. In a few cases, companies began offering customers small bonuses for bringing their own bags.

Given the growing popularity of disposable bag regulations, this raises the question:

Have any successfully changed consumer behavior? And are all policies created equal?

Impetus behind the bans

It’s obvious why cities, regions and countries want to reduce the use of disposable bags.

Americans, for example, go through 100 billion single-use plastic bags every year – or 325 per person – that end up in landfills, streams and lakes, where they take 10 to 20 years to degrade. The bags cost retail stores about three cents apiece, and since this cost is normally incorporated into the price of everything else, consumers don’t see it and thus have no incentive to reduce their use.

Plastic bags made up almost half of the trash in Washington, DC’s tributaries, according to a 2008 study, while a look at the budgets of six major cities showed that they spent 3.2 to 7.9 cents per bag on litter control, which suggests total spending across the US could tally US$3.2 billion to $7.9 billion a year.
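Those litter-control totals are internally consistent; a quick back-of-the-envelope check, using only the figures quoted above:

```python
# Sanity check: 100 billion bags per year at 3.2-7.9 cents of
# litter-control cost per bag implies the quoted annual totals.
bags_per_year = 100e9
cost_low, cost_high = 0.032, 0.079  # dollars per bag

total_low = bags_per_year * cost_low
total_high = bags_per_year * cost_high

print(f"${total_low / 1e9:.1f}B to ${total_high / 1e9:.1f}B per year")
# $3.2B to $7.9B per year
```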

And even when the bags are recycled, they present problems by clogging up the machines.

Varying approaches

Bangladesh became the first country to regulate disposable bag use when the government banned single-use plastic bags in 2002. Shortly after, Ireland implemented an alternative regulation, a €0.17 tax per plastic bag (later raised to €0.33) called the “Plastax.”

The varying approaches don’t end there. Like England, China and South Africa do not levy a tax on disposable bags, but require that store owners charge a fee for bag use. Rwanda banned plastic bags in 2008, and agents at the airport not only confiscate any they find but also cut the plastic wrapping off of suitcases.

Similar variation exists across the US. In 2010, Washington, DC became the first American city to charge customers for the use of disposable bags when the City Council passed a five-cent tax on both paper and plastic bags. And last year, California became the first state to pass such legislation, which coupled a tax on paper bags with a ban on plastic ones, but that policy won’t take effect until voters approve it in a referendum next year.

Government regulations are not the only policies aimed at curbing disposable bag use. Several grocery store chains offer their own incentives to curb disposable bag use, such as financial rewards for customers who bring their own bags. For example, Whole Foods rewards customers with a ten-cent bonus for each reusable bag.

Which ones work

In a recent study, I examined the relative effectiveness of two policies in the Washington, DC metropolitan area: a five-cent tax on paper and plastic disposable bag use and a five-cent bonus for reusable bag use.

If disposable and reusable bags are substitutes, the two policies are financially equivalent – each policy provides customers a five-cent incentive for using a reusable bag instead of a disposable bag. Standard economic theory tells us that individuals should have a similar response to the two types of incentives given that they are of the same monetary amount.

However, evidence from behavioral economics suggests that individuals are “loss averse,” meaning that they perceive losses more strongly than gains. If this is the case, then the tax may be more effective at changing behavior than a bonus.
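The asymmetry can be illustrated with the Kahneman-Tversky value function from prospect theory; this is a minimal sketch, where the curvature and loss-aversion parameters (alpha ≈ 0.88, lambda ≈ 2.25) are the standard estimates from Tversky and Kahneman (1992), not figures from the bag study itself:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: gains are evaluated as x**alpha,
    while losses are amplified by the loss-aversion coefficient lam."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A five-cent tax is experienced as a loss; a five-cent bonus as a gain.
tax_feel = prospect_value(-0.05)
bonus_feel = prospect_value(0.05)

# The loss looms larger than the equal-sized gain by a factor of lam:
print(abs(tax_feel) / bonus_feel)  # ≈ 2.25
```

Under these assumed parameters, the same five cents feels more than twice as large when framed as a charge rather than a bonus, which is why the two "financially equivalent" policies need not produce equivalent behavior.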

My results showed just that. While 82% of customers used disposable bags prior to the tax, this fraction declined to 40% after the tax was implemented.

In contrast to the overwhelming impact of the tax, a five-cent bonus for reusable bag use had almost no impact on disposable bag use, evidence consistent with a model of loss aversion.

A related study found similar results after evaluating the impact of a policy in the San Francisco Bay Area.

That policy imposed a ban on plastic bags in addition to a varying charge on paper bags. The study found that while it eliminated the use of plastic bags, it also generated an increase in the use of paper bags. This suggests that banning one type of disposable bag while leaving another largely unregulated may lead to unintended consequences.

However, the effect of the policy on total disposable bag use (paper and plastic bags combined) was still quite effective – the proportion of customers using any type of disposable bag decreased by roughly 50%.


A federal solution?

Should the United States consider taxing or banning disposable bags?

The results from the two studies above suggest that while a small tax on disposable bags has a substantial impact on bag use, roughly 40% of shoppers continue to use disposable bags anyway. If the policy goal is to eliminate disposable bag use altogether, these results suggest a need for a stricter regulation.

But in spite of environmental concerns, it is not obvious that the optimal policy is to reduce disposable bag consumption to zero, through a ban. The environmental costs may not always outweigh the benefits some shoppers receive from the convenience of getting a disposable bag at the store. If they are willing to pay a higher tax or fee in exchange for that convenience, it could offset the costs to the environment.

So while a ban or a larger tax may be successful at reducing disposable bag use even further, policymakers should carefully weigh the benefits of that reduction against the burden shoppers would face from the inconvenience or financial costs of the policy. In contrast, less restrictive policies, such as nominal fees for bag use, change the behavior of only those customers who are almost indifferent between using a disposable bag or not.

A tiny tax had an impressive impact on behavior, suggesting that a policy that focuses on consumers on the margin could still have a lot of bite. And maybe that is a good place for policymakers to start.

Nudges: Social Engineering or Sensible Policy?

In a world that overloads us with information and temptation, it’s nice to get some help that steers us towards making better choices. That’s the idea behind nudges – a broad term for behavioral science techniques that aim to influence how we make decisions.

The underlying philosophy behind nudges is libertarian paternalism: people should be free to make the decisions they want, but policymakers can present these choices in ways that lead to desired outcomes. For instance, employees contribute significantly more to their savings plans when they are automatically enrolled in a 401(k) program compared to when they need to opt-in. These policies help individuals save more for retirement, a generally desired goal that many may not achieve otherwise.

Should we be concerned about the use of nudges?

Yet as noted by Cass Sunstein, one of the most prominent proponents of nudging, “there can be a thin line between a self-control problem and a legitimate focus on short-term pleasure.” Despite their cost-effectiveness and wide applicability, legitimate concerns remain about whether, when, and how we should nudge.

Should policymakers be influencing how people make these kinds of decisions? Even though nudges don’t mandate behaviors like laws, are nudges manipulating people into decisions they would not endorse upon reflection?

Our answers to these types of questions depend on many factors. In particular, recent research from social psychology suggests that how nudges are presented, and who presents them, strongly impacts our perceptions of these policies.

Choice Architecture – Defaults Matter

Imagine you’re moving into a new apartment with basic amenities and are given an opportunity to purchase premium upgrades. Now, imagine that you’re moving into a new apartment with already upgraded amenities and are given the chance to opt-out of the upgrades for the basic package.

These two scenarios offer exactly the same decision outcomes, but they differ in the default option (basic vs. premium amenities). The way that choices are framed for decision-makers is called choice architecture, and changing the choice architecture of a given scenario can significantly impact the decisions people make.

For instance, research from Mary Steffel and her colleagues found that participants in the opt-out (premium default) condition kept more premium amenities than those in the opt-in (basic default) condition chose to purchase. Similar results emerged when environmentally friendly, “green” amenities were substituted for the premium options.

Transparency and Motives of Choice Architecture 

Interestingly, unlike many studies on default effects, Steffel’s team also disclosed to participants why the apartment options had been presented in a specific way.

The “green” amenities were framed as benefitting society while the premium amenities were framed as benefitting the apartment complex owner. Participants were told about how the set defaults may influence their decision-making as well. Each participant then rated the opt-in or opt-out scenario they were given on how ethical they thought it was.

The opt-out was perceived as less ethical than the opt-in for the premium amenities, but participants thought the opt-out and opt-in were equally ethical for the “green” packages. These results suggest that the perceived motives of the choice architect play a significant role in whether people approve of nudges. Perhaps unsurprisingly, nudges benefitting society are seen as more ethical than nudges benefitting a single business or person.

Nevertheless, nudges perceived as unethical are still effective. In the aforementioned study, the nudge still prompted people to buy more premium items even when it was disclosed and perceived to be unethical. Given the subtle power of even transparent nudges, businesses and policymakers will need to be trusted – and held accountable – to use these behavioral tools ethically.

Thus, those advocating for the wider use of nudges must overcome citizens’ current lack of trust in institutions and the divisive polarization of partisan politics. The ethical use of these techniques will often require bipartisan agreement on desired outcomes, a state of affairs that rarely exists and is even psychologically avoided.

The Inconsistent Politics of Nudges 

Established in 2010 under conservative Prime Minister David Cameron, the Behavioral Insights Team was the first “nudge unit” developed to utilize behavioral science techniques in government policy. The Social and Behavioral Sciences Team established under President Obama followed suit in 2015 and has also produced many successful interventions over the last few years.

It’s notable that a UK conservative and a US liberal championed these teams. Yet principled critiques of these nudge units have come from across the political spectrum.

Why are there such political discrepancies in support for nudges? Researcher David Tannenbaum and his colleagues suggest that partisanship itself is the problem.

Partisan Nudge Bias

To examine these partisan biases, Tannenbaum showed participants legislation that proposed automatic enrollment defaults to encourage retirement savings behavior. This provision, part of the 2006 Pension Protection Act (PPA), was presented as an example of a behavioral tool that could be implemented “across a wide range of policies beyond the illustration above.” The provision was approved by both Presidents George W. Bush and Barack Obama, but participants were told that only one of the presidents, or a vague description of “lawmakers,” enacted the policy.

When asked how they felt about nudges as a general policy tool, liberals and conservatives generally approved of nudges in the “lawmakers” approved condition. Predictably, partisans’ feelings towards nudges as a tool – not just their feelings for the specific policy – shifted as a function of the President they thought approved the legislation. Liberals liked nudges more when Obama approved the PPA and disliked them when Bush approved it. Conservatives displayed the opposite effect.

In subsequent studies, Tannenbaum demonstrated that these partisan nudge biases are not limited to ordinary citizens. State and local bureaucrats, and even a group of mayors, exhibited similarly biased evaluations. Policymakers liked automatic enrollment defaults significantly more when they were illustrated by an example that aligned with their politics. Even for professional policymakers, much of the opposition to nudges seemingly takes the form of partisan antipathy.


A Tool Like Any Other

In our highly polarized times, it can be difficult to distinguish social engineering from sensible policy. But like other powerful tools, nudges are not inherently ethical or unethical. The fairness, effectiveness, and desirability of a nudge should be evaluated on a case-by-case basis.

Choice architecture is inevitable. The way we frame decisions impacts what we choose whether we think about it or not. It’s better to critically evaluate how we craft choice architecture than to ignorantly pretend such factors don’t influence us at all.

Focusing on the outcomes of specific policies, rather than on who offers them, may lead to more agreement on when and how nudges should be utilized. However, getting people to concentrate on those kinds of facts may require a nudge of its own.