Increasing The Pull Of The Future Self

Imagine the following two situations:

Steve's doctor tells him his cholesterol is a bit high and that, to reduce his risk of a heart attack, he should cut back on fried foods. However, shortly afterwards, Steve and his friends go to a sports bar, and a big order of Steve's favorite greasy onion rings is placed on the table. Even though he remembers his doctor's words, Steve wants to dig in.

Kate goes to Vegas for spring break. Before the trip, she carefully calculates how much money she can allot to the trip without running into financial difficulties later in the year. However, once she gets pulled into the excitement of the slot machines, she feels compelled to keep gambling even though it would put her over her budget.

Intertemporal Tradeoffs and Discounting

Steve and Kate are both faced with intertemporal tradeoffs between present and future benefits, a phenomenon of broad research interest throughout the fields of psychology, behavioral economics, and marketing. Making decisions involving these tradeoffs seems to be notoriously difficult for most people, who often choose the immediately rewarding option, only to regret their decision later. This widespread tendency to underweight the impact of future outcomes compared to present ones is referred to as temporal discounting (see Frederick, Loewenstein, & O’Donoghue, 2002), a phenomenon that can sometimes lead to serious costs.

Why do people discount future outcomes?

A variety of psychological explanations has been offered as to why people might excessively discount the impact of future outcomes. One explanation is that some people simply don’t think much about the future at all when making day to day choices (e.g., see Simons, Vansteenkiste, Lens, & Lacante, 2004). In other cases, people are explicitly aware of future consequences, yet disregard them, either because they incorrectly estimate the future emotional impact of their actions (e.g., “I’ll be totally happy living on a stricter budget and eating ramen noodles for days on end”, see Gilbert & Wilson, 2007), or simply cannot resist the pull of the immediate reward (e.g., “those onion rings smell really good”, see Loewenstein, 1996).

The Future Self

Another explanation that has been gaining increasing attention involves people’s perceptions of their own future self. The philosopher Derek Parfit (1984) suggested that our degree of future concern should differ based on the degree of psychological overlap between the present and future self. If the future self is psychologically similar to who we are now, we should be concerned about its fate much as we are about our current well-being. In contrast, if there is little similarity between who we are now and who we expect to be, we may think about the future self as if it were another person entirely, favoring the current self when presented with a tradeoff.

Empirical research suggests that people do, at least in some cases, seem to think about the future self as they would a third party. For example, people are more likely to view themselves from a third person viewpoint when imagining a future scene, but a first person viewpoint when imagining a current scene (Pronin & Ross, 2006). People are also just as likely to assign unpleasant tasks to their future selves as they are to another person, but less likely to accept current responsibility for these tasks (Pronin, Olivola, & Kennedy, 2008).

Interventions to increase concern for the future self

Increasing the degree to which people make future-oriented decisions is often desirable, and could help address many practical problems such as reducing obesity, increasing savings rates, and curbing procrastination. A review of the literature suggests several types of possible interventions, some of which have been successfully implemented.

Intervention 1: Vividly imagine future self

Imagining one's future self in vivid, concrete terms can help address short-sighted decisions that stem from a lack of attention to the future self. For example, researchers have helped people imagine the future more vividly by showing them an avatar depicting an age-progressed rendition of themselves (Hershfield et al., 2011). In one study, people chose to save more money when current actions were directly linked to future outcomes by making the expression of the aged avatar (from frowning to smiling) contingent on the amount saved (Hershfield et al., 2011).

Intervention 2: Emphasize similarity between present and future self

Building on correlational findings that those who perceive more personal stability over time tend to behave in a more future-oriented fashion (e.g., increased saving; Ersner-Hershfield, Garton, Ballard, Samanez-Larkin, & Knutson, 2009), research suggests that manipulating people's thoughts about personal change can serve as an intervention to change behavior. For example, Bartels & Urminsky (2011) found that college seniors were more likely to elect to delay a monetary reward (resulting in greater total compensation) when they were told that they would remain pretty much the same person after graduation than when graduation was described as a fundamentally life-altering event.

Intervention 3: Capitalize on positive views

Recent findings suggest that even in cases where the future self might be fundamentally different from the current self, future-oriented behaviors can be encouraged by emphasizing the positive qualities of the future self (Molouki & Bartels, 2016; Molouki, Bartels, & Hershfield, 2016). Notably, individuals who have low self-esteem may be more likely to benefit from a description of the future self that highlights positive differences between the present and future self, suggesting that the future self is someone that they would like and care about, rather than emphasizing continued identification with an undesirable current state.

Summary and Conclusion

I have described several psychological explanations for excessive discounting of future outcomes, with a particular emphasis on perceptions of the future self. In general, people are more future-oriented when they a) vividly envision the consequences of their present actions for the future self, b) feel a similarity between the future and present self, and/or c) acknowledge positive qualities of the future self.  Thus, presenting people with interventions to make them think about the future self in these ways can encourage them to behave in line with their long-term interests.

Evolution Of Decision Making (1/3): The Rational Revolution

When we make decisions, we make mistakes. We all know this from personal experience, of course. But just in case we didn’t, a seemingly unending stream of experimental evidence in recent years has documented the human penchant for error. This line of research—dubbed heuristics and biases, although you may be more familiar with its offshoot, behavioral economics—has become the dominant academic approach to understanding decisions. Its practitioners have had a major influence on business, government, and financial markets. Their books—Predictably Irrational; Thinking, Fast and Slow; and Nudge, to name three of the most important—have suffused popular culture.

So far, so good. This research has been enormously informative and valuable. Our world, and our understanding of decision making, would be much poorer without it.

Another way of thinking about decision making

It is not, however, the only useful way to think about making decisions. Even if you restrict your view to the academic discussion, there are three distinct schools of thought. Although heuristics and biases is currently dominant, for the past half century it has interacted with and sometimes battled with the other two, one of which has a formal name—decision analysis—and the other of which can perhaps best be characterized as demonstrating that we humans aren’t as dumb as we look.

Adherents of the three schools have engaged in fierce debates, and although things have settled down lately, major differences persist. This isn’t like David Lodge’s aphorism about academic politics being so vicious because the stakes are so small. Decision making is important, and decision scholars have had real influence.

This article series tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.

The Rational Revolution

During World War II statisticians and others who knew their way around probabilities (mathematicians, physicists, economists) played an unprecedented and crucial role in the Allied effort. They used analytical means—known as operational research in the UK and operations research on this side of the Atlantic—to improve quality control in manufacturing, route ships more safely across the ocean, figure out how many pieces antiaircraft shells should break into when they exploded, and crack the Germans’ codes.

After the war hopes were high that this logical, statistical approach would transform other fields. One famous product of this ambition was the nuclear doctrine of mutual assured destruction. Another was decision analysis, which in its simplest form amounts to (1) formulating a problem, (2) listing the possible courses of action, and (3) systematically assessing each option. Historical precedents existed—Benjamin Franklin had written in the 1770s of using a “Moral or Prudential Algebra” to compare options and make choices. But by the 1950s there was tremendous interest in developing a standard approach to weighing options in an uncertain future.

The mathematician John von Neumann, who coined the term mutual assured destruction, helped jump-start research into decision making with his notion of "expected utility." As outlined in the first chapter of his landmark 1944 book Theory of Games and Economic Behavior, written with the economist Oskar Morgenstern, expected utility is what results from combining imagined events with probabilities. Multiply the likelihood of a result by the gains that would accrue, and you get a number, expected utility, to guide your decisions.
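To make that arithmetic concrete, here is a minimal sketch in Python; the bet and its numbers are hypothetical, invented for illustration rather than taken from von Neumann's book:

```python
# Expected value in its simplest form: weight each possible payoff by its
# probability and sum. The gamble below is a made-up illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

risky_bet = [(0.25, 100), (0.75, 0)]   # 25% chance of winning $100, else nothing
sure_thing = [(1.0, 20)]               # $20 for certain

print(expected_value(risky_bet))       # 25.0
print(expected_value(sure_thing))      # 20.0
# Von Neumann's "utility" replaces the raw dollar payoff with u(payoff),
# so that attitudes toward risk can be captured as well.
```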

It’s seldom that simple, of course. Von Neumann built his analysis around the game of poker, in which potential gains are easily quantifiable. In lots of life decisions, it’s much harder. And then there are the probabilities: If you’re uncertain, how are you supposed to know what those are?

The winning answer was that there is no one right answer—everybody has to wager a guess—but there is one correct way to revise probabilities as new information comes in. That is what has become known as Bayesian statistics, a revival and advancement of long-dormant ideas (most of them the work not of the English reverend Thomas Bayes but of the French mathematical genius Pierre-Simon Laplace) by a succession of scholars starting in the 1930s. For the purposes of storytelling simplicity I’ll mention just one: Leonard Jimmie Savage, a statistics professor whose 1954 book The Foundations of Statistics laid out the rules for changing one’s probability beliefs in the face of new information.
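As an illustration of such a revision rule, Bayes' theorem turns a prior belief into a posterior once new evidence arrives. The numbers below are a toy example of my own, not one of Savage's:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), the "one correct way" to
# revise a probability when new information comes in. Numbers are illustrative.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis given one piece of evidence."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Start 60% confident a project will succeed; a favorable test result is 80%
# likely if it will succeed and 30% likely if it won't.
posterior = bayes_update(prior=0.6, likelihood_if_true=0.8, likelihood_if_false=0.3)
print(round(posterior, 2))  # 0.8 -- belief revised upward by rule rather than by gut
```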

Products of the rational revolution

One early and still-influential product of this way of thinking is the theory of portfolio selection, outlined in 1952 by Savage’s University of Chicago student Harry Markowitz, which advised stock pickers to estimate both the expected return on a stock and the likelihood that their estimate was wrong. Markowitz won a Nobel prize for this in 1990.

The broader field of decision analysis began to come together in 1957, when the mathematician Howard Raiffa arrived at Harvard with a joint appointment in the Business School and the department of statistics. He soon found himself teaching a statistics course for business students with Robert Schlaifer, a classics scholar and fast learner who in the postwar years taught pretty much whatever needed teaching at HBS. The two concluded that the standard statistics fare of regressions and P values wasn’t all that useful to future business leaders, so they adopted a Bayesian approach. Before long what they were teaching was more decision making than statistics. Raiffa’s decision trees, with which students calculated the expected value of the different paths available to them, became a staple at HBS and the other business schools that emulated this approach.
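Here is a hedged sketch of that kind of calculation; the two branches and their numbers are invented for illustration, not one of Raiffa's classroom cases. The tree is "rolled back" by taking probability-weighted averages at chance nodes and picking the best branch at the decision node:

```python
# Rolling back a tiny decision tree: each chance node gets a probability-weighted
# average, and the decision node picks the branch with the highest expected value.
# All payoffs and probabilities here are hypothetical.

def expected_value(branch):
    return sum(p * payoff for p, payoff in branch)

options = {
    "launch product": [(0.5, 400_000), (0.5, -100_000)],  # risky: big win or a loss
    "license it out": [(1.0, 120_000)],                    # safe: a certain payment
}

values = {name: expected_value(branch) for name, branch in options.items()}
best = max(values, key=values.get)
print(values)           # {'launch product': 150000.0, 'license it out': 120000.0}
print("choose:", best)  # choose: launch product
```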

The actual term “decision analysis,” though, was coined by Ronald Howard, an MIT electrical engineer and an expert in statistical processes who had studied with some of the leading figures in wartime operations research at MIT and crossed paths with Raiffa in Cambridge. While visiting Stanford for the 1964–1965 academic year, Howard was asked to apply the new decision-making theories to a nuclear power plant being contemplated at General Electric’s nuclear headquarters, then located in San Jose. He combined expected utility and Bayesian statistics with computer modeling and engineering techniques into what he dubbed decision analysis and some of his followers call West Coast decision analysis, to distinguish it from Raiffa’s approach. Howard and Raiffa were honored as the two founding fathers of the field at its 50th-anniversary celebration last year.

This article originally appeared in [https://hbr.org/2015/05/from-economic-man-to-behavioral-economics] and belongs to the creators.

How Decision Science Could Have Changed World History

Better insight into human behavior and decision science by a county government official might have changed the course of world history.

The 2000 presidential election

Late in the evening of November 7, 2000, as projections from the U.S. presidential election rolled in, it became apparent that the outcome would turn on which candidate carried Florida. The state initially was called by several news outlets for Vice President Al Gore, on the basis of exit polls. But in a stunning development, that call was flipped in favor of Texas Governor George W. Bush as the actual ballots were tallied.1 The count proceeded through the early morning hours, resulting in a narrow margin of a few hundred votes for Bush that triggered an automatic machine recount. In the days that followed, intense attention focused on votes disallowed due to “hanging chads” on ballots that had not been properly punched. Weeks later, the U.S. Supreme Court halted a battle over the manual recount in a dramatic 5–4 decision. Bush would be certified the victor in Florida, and thus president-elect, by a mere 537 votes.

The “butterfly ballot”

Less attention was paid to a news item that emerged right after the election: A number of voters in Palm Beach County claimed that they might have mistakenly voted for conservative commentator Pat Buchanan when they had intended to vote for Gore. The format of the ballot, they said, had confused them. The Palm Beach County ballot was designed by Theresa LePore, the supervisor of elections, who was a registered Democrat. On the Palm Beach County “butterfly ballot,” candidate names appeared on facing pages, like butterfly wings, and votes were punched along a line between the pages (see Figure 1). LePore favored this format because it allowed for a larger print size that would be more readable to the county’s large proportion of elderly voters.2

Figure 1. Palm Beach County’s 2000 butterfly ballot for U.S. president

The mistake

Ms. LePore unwittingly neglected an important behavioral principle long known to experimental psychologists: To minimize effort and mistakes, the response required (in this case, punching a hole in the center line) must be compatible with people’s perception of the relevant stimulus (in this case, the ballot layout).3,4 To illustrate this principle, consider a stove in which burners are aligned in a square but the burner controls are aligned in a straight line (see Figure 2, left panel). Most people have difficulty selecting the intended controls, and they make occasional errors. In contrast, if the controls are laid out in a square that mirrors the alignment of burners (see Figure 2, right panel), people tend to make fewer errors. In this case, the stimulus (the burner one wishes to light) better matches the response (the knob requiring turning).

Figure 2. Differences in compatibility between stove burners and controls

Confused voters may have cost Al Gore the presidency

A close inspection of the butterfly ballot reveals an obvious incompatibility. Because Americans read left to right, many people would have perceived Gore as the second candidate on the ballot. But punching the second hole (No. 4) registered a vote for Buchanan. Meanwhile, because George Bush's name was listed at the top of the ballot and a vote for him required punching the top hole, no such incompatibility was in play, so no related errors should have occurred. Indeed, a careful analysis of the Florida vote in the 2000 presidential election shows that Buchanan received a much higher vote count than would be predicted from the votes for other candidates using well-established statistical models. In fact, the "overvote" for Buchanan in Palm Beach County (presumably, by intended Gore voters) was estimated to be at least 2,000 votes, roughly four times the vote gap between Bush and Gore in the official tally.5 In short, had Ms. LePore been aware of the psychology of stimulus–response compatibility, she presumably would have selected a less confusing ballot design. In that case, for better or worse, Al Gore would almost certainly have been elected America's 43rd president.

Policy-making and the “rational agent” view

It is no surprise that a county-level government official made a policy decision without considering a well-established principle from experimental psychology. Policymaking, in both the public and the private sectors, has been dominated by a worldview from neoclassical economics that assumes people and organizations maximize their self-interest. Under this rational agent view, it is natural to take for granted that given full information, clear instructions, and an incentive to pay attention, mistakes should be rare; systematic mistakes are unthinkable. Perhaps more surprising is the fact that behavioral science research has not been routinely consulted by policymakers, despite the abundance of policy-relevant insights it provides.

Improving state of affairs

This state of affairs is improving. Interest in applied behavioral science has exploded in recent years, and the supply of applicable behavioral research has been increasing steadily. Unfortunately, most of this research fails to reach policymakers and practitioners in a useable format, and when behavioral insights do reach policymakers, it can be difficult for these professionals to assess the credibility of the research and act on it. In short, a stubborn gap persists between rigorous science and practical application.

This article originally appeared in [https://behavioralpolicy.org/article/bridging-the-divide/] and belongs to the creators.

Decision-Making Parallels Between Humans And Animals

Drawing parallels between human and animal decision-making

People constantly face complex choices throughout their daily lives. Which bank should I open an account with? Should I walk or drive to work? Do I want to cook dinner or just order a pizza? To make these decisions, people often balance the advantages and disadvantages of each choice. This can be done explicitly, by making an actual list of pros and cons, or implicitly, when a person 'follows their gut' and chooses without thinking.

However, humans are not the only animals that make decisions. From the proud lion prowling the African savannah to the graceful house cat of a New York apartment, all animals must make decisions throughout their lives. So how do other animals make choices, and is their decision-making process anything like that of humans? Over the past 50 years, behavior analysts have been studying animals in the lab to try and answer these very questions.

Examining how animals make decisions

By studying the behavior of non-human animals in operant chambers, or ‘Skinner boxes’ as they’re colloquially called, scientists have been able to carefully examine the ways that animals make decisions.

It goes like this: the animal (often a rat or a pigeon) is put into the chamber and given a choice between two alternatives. Over time, the animal learns that choosing alternative A gives them X amount of food and alternative B gives them Y amount of food. By changing the amount of food each alternative gives the animal, the scientist is able to alter the choices of the animal. From the animal’s perspective, it seems like they are learning to weigh the advantages and disadvantages of each choice. For example, alternative A may give twice as much food as alternative B, but the quality of the food may be worse (cheap buffet vs fine dining).

The “matching law” in animal decision-making

By using the method described above, behavior analysts have studied decision-making so carefully that they have been able to develop a simple mathematical equation that predicts the choices of animals almost perfectly! The equation is called the "matching law" because animals' decisions have been found to match the combined advantages and disadvantages of their choices.

This equation is able to account for all of the different ways scientists have devised to vary the qualities of the two choices (and scientists can get pretty creative when it comes to their research), including the amount of food, the quality of food, the delay to food, and much more.
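The post does not print the equation itself, but the classic form, Herrnstein's matching law, states that the proportion of responses an animal allocates to an option matches the proportion of reinforcement that option delivers. A minimal sketch, with illustrative reinforcement rates:

```python
# Herrnstein's matching law: B1 / (B1 + B2) = R1 / (R1 + R2),
# where B is the rate of responding on an alternative and R is the rate of
# reinforcement (e.g., food deliveries) that alternative provides.

def predicted_choice_proportion(r1, r2):
    """Proportion of responses the matching law predicts for alternative 1."""
    return r1 / (r1 + r2)

# If key A pays off 40 times per hour and key B pays off 20 times per hour,
# the pigeon is predicted to devote about two-thirds of its pecks to key A.
print(round(predicted_choice_proportion(40, 20), 2))  # 0.67
```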

Can we extend the “matching law” to human decision-making?

While the matching law has been shown to predict the decisions of animals, it’s reasonable to question whether it is also able to account for the choices people make every day. Further research, both experimental and archival, has found that the matching law does, in fact, describe human decision-making accurately!

Research in this area is abundant, with examples ranging from choices as simple as picking between varieties of snacks to more complex decision-making in sports. Children tend to work on class assignments harder and longer when they receive preferred treats as rewards. College students will pay more attention during class discussions when conversing with someone who makes frequent statements of agreement with them.

In the realm of sports, the proportion of 2- vs 3-point shots attempted by college and NBA basketball players matches the success rates of those shots. Additionally, college football coaches call for rushing and passing plays according to the average yards gained by those calls. In those instances, and many others, the matching law was able to accurately describe the decisions of people.

Final thoughts on decision-making parallels between humans and animals

Humans and animals are continually making choices throughout their lives, and these choices are often made in chaotic and dynamic environments. Behavior-analytic research, by contrast, takes place in a tightly controlled laboratory environment with animals making relatively simple decisions. Even with these potential limitations of animal research, studies have shown that the decision-making process of animals is very similar to that of humans. Over time, all animals (humans and non-humans alike) weigh the advantages and disadvantages of their choices and behave accordingly. While human social situations are definitely more complicated, scientists will continue to pursue research on this important topic and to find similarities between human and animal decision-making.

The Impact Of FREE On Consumer Decision-Making

The Power of FREE

Imagine yourself walking into the bakery section of a supermarket. It is 4 pm and you are looking for an afternoon snack. You smell freshly baked cookies. You approach the cookie stand and see the delicious double-chocolate chip cookies. Now consider three scenarios. In scenario #1, you see a tag that reads $2 per cookie. How many cookies would you get? In scenario #2, imagine you are in the same situation but the tag now reads 1 cent per cookie. How many cookies would you get now? Finally, in scenario #3, imagine that the tag reads FREE. How many cookies would you get in this case?

Economic theory predicts you will get more and more cookies as the price of cookies decreases. But is there something special about a price of zero? This is the question that Dan Ariely and his coauthors explored with a simple experiment. They set up a candy stand at the MIT student center. On some afternoons, they offered candies for 1 cent each; on other afternoons, they offered the candies for free. They then measured how many students stopped by the candy stand and how many candies each student took.

When candies were offered for 1 cent, 58 students stopped by. When candies were free, 207 students stopped by. As expected, offering candies for free increased the number of students dropping by. But here comes the trick. When candies were offered for 1 cent, students took 3.5 candies on average. When candies were free, they took only 1.1. Increasing the price from zero to 1 cent tripled the number of candies students ended up taking. Weird, huh?

Looking at social norms vs market norms

The researchers explain this phenomenon in terms of social norms. In this situation, being polite requires taking 1 and only 1 candy. When something, say a cookie, is offered for free, social norms are in place. You will think about the consequences of your action (taking too many cookies) on others. Simply put, if you take too many, there won't be enough cookies left for others. You will also consider how others would perceive you if you took too many cookies. Greedy, selfish, ignorant?

Contrast this with the case where a cookie is offered for a cent. Here, because of the simple exchange of money for a good, market norms are in place. You will feel like you got the deal of the day and end up buying many more cookies. You won't even think about the social consequences of your excessive consumption.

Evaluating FREE in other situations

The power of free is not limited to the phenomenon described above. People also tend to value free products more than they otherwise would. Imagine you are heading to work and a line in front of Starbucks catches your attention. Then you see the sign that says FREE tall regular coffee. How long would you be willing to wait in line? You might be willing to wait 15 minutes for a free Starbucks coffee that would normally cost you $3. And this might be true even if your hourly pay rate is $60 and you are wasting your precious time standing in line. To test whether people overvalue free products, Ariely and coauthors designed yet another experiment.

In one of the MIT cafeterias, they placed two bins of chocolates (Hershey's and Lindt Truffles) right next to the cashiers. They put up a large sign that read one chocolate per person and posted the prices on the chocolate boxes. They varied the prices to test whether people overvalue the free item. In one case, Hershey's was 1 cent and Lindt was 14 cents. In the other case, Hershey's was 0 cents (i.e., free) and Lindt was 13 cents. What would you expect to happen to the number of Hershey's and Lindt chocolates sold when the prices drop by 1 cent?

Under standard economic theory, people who buy Hershey’s in the first case will continue to buy Hershey’s. People who buy Lindt in the first case will continue to buy Lindt. And some people who don’t buy any chocolate in the previous case will start buying some chocolate when the prices drop. Hence, the number of people who buy Hershey’s or Lindt will increase. But, the results of this experiment paint a quite different picture. In the first case, 8% of the customers bought Hershey’s and 30% of the customers bought Lindt. When prices dropped by 1 cent, 31% of the customers bought Hershey’s whereas 13% of the customers bought Lindt. A huge change in the demand for such a small change in the price.

(Source: Shampanier, Mazar and Ariely (2007))

Why do we overvalue FREE?

With this experiment, the researchers show that people overvalue free products. They behave as if a zero price meant not only a drop in the cost of buying the product, but also an increase in its value. As a Turkish and Russian proverb says, "Vinegar that is free is sweeter than honey." But why do we see such behavior? The researchers argue it is due to what psychologists call affect.

Affect has two components in this setting: (1) free products make people happier, and (2) happiness affects people's decision-making process. The researchers support this argument with two follow-up experiments. In the first, they asked subjects how attractive a piece of Hershey's is when it is free and when it is 1 cent, as well as how attractive a piece of Lindt is when it is 13 cents and when it is 14 cents. Subjects rated each chocolate-price pair using one of five smiley faces ranging from very unhappy to very happy. Free Hershey's got the highest score by far, supporting the hypothesis that free products create happiness. In the second experiment, the researchers tried to mute this affective response by making subjects carefully analyze the options. When subjects were forced to think deliberately, the zero-price effect indeed went away.

So from now on, whenever something is offered for free, take a step back and consider the true cost of FREE.

To Be Right or Liked? Evaluating Political Decision-Making

In the era of Twitter mobs and polarizing pundits, it seems like we care a lot about figuring out the truth and expressing it – resoundingly. Ideological battles are constantly being waged in the halls of Congress, on our news channels, and in our Facebook feeds. We can share our worldview like never before, yet we often feel worlds apart when assessing our shared reality.

But if we care so much about being right, why do we argue so much when facts about contentious topics are readily available? If the data are gathered, shouldn’t we all be reaching the same conclusions?

Recognizing biases in how we attend to information

Unfortunately, we don’t care about the truth as much as we typically think. A large body of evidence suggests people will attend to political information in incredibly biased ways. Our highly social brains make discerning truth more difficult than we might hope because we often protect our previous beliefs rather than face inconvenient truths.

For instance, Kahan et al. (2013) tasked a nationally representative set of participants with a difficult numeracy problem. A quick look at the data could easily lead participants to the incorrect interpretation, because the intuitive answer was designed to be wrong. Reaching the correct answer required participants to think carefully about the data. Interestingly, people were less accurate in interpreting the same data when they believed the information came from a study on gun control than when it was about the supposed efficacy of a new skin cream.

(From Kahan et al., 2013)

In the skin cream conditions, accuracy was best predicted by participants’ previously established quantitative abilities: those with better numeracy skills were more likely to interpret the results correctly. Yet for those in the gun control conditions, interpretive accuracy was significantly predicted by whether or not the data affirmed participants’ previous beliefs. Conservatives were more accurate when the correct interpretation suggested that banning concealed guns increased crime, and liberals were more accurate when the correct interpretation suggested that banning concealed guns decreased crime.

Heuristic Thinking or Motivated Reasoning? 

One popular theory for interpreting these and similar findings proposes that people rely too heavily on automatic, heuristic, System 1 thinking (Sunstein, 2005). Reasoning through difficult problems is hard and costly, so we often engage our intuitions and emotions to quickly guide us to the right decisions. Problems with this mode of thinking can manifest when people vote along party lines: we believe our party usually represents our values, so we may not notice when a party contradicts our personal positions (Cohen, 2003).

While this theory explains part of the picture of biased thinking, it doesn’t adequately explain the results of the Kahan et al. (2013) study. The intuitive, heuristic answer was designed to lead participants astray, so why were partisans more accurate when the counterintuitive, correct answer affirmed their previous beliefs?

An alternative theory based on research in motivated reasoning can clarify these results. Motivated reasoning theorists propose that we often reason with desired conclusions in mind and selectively recruit our mental faculties towards reaching those conclusions. Studies in this area find that quantity of information processing matters as much as the quality of reasoning. When System 1 thinking produces the conclusion we want, we stop thinking and move on; but when System 1 thinking produces an answer that challenges our previous assumptions, we look again and think more critically (System 2 reasoning) to try to figure out the answer (Ditto, 2009).

Understanding Politically Motivated Reasoning

In support of this view, participants whose political assumptions were affirmed by the correct interpretation of the gun ban data were more likely to answer correctly than participants whose identities were affirmed by the incorrect heuristic assumption. This difference was 20% greater in those with the highest quantitative abilities, but no such differences emerged in the identity-neutral skin cream evaluations.

(From Kahan et al., 2013)

Essentially, highly skilled participants were more capable of reaching the correct interpretation when motivated to do so, but like everyone else they lacked the motivation to reason accurately about a charged topic unless their political beliefs were being challenged.

If the people most capable of accurately interpreting data are at least as biased as the rest of us, how can we hope to find common ground on our most pressing and divisive issues?

Understanding motivations and incentives at play

To start, we need to understand the motivations and incentives at play.

Part of why these results emerge is that it is often rational for individuals to believe in incorrect party platitudes. Most of us have little influence on policymaking processes, but expressing our political beliefs serves a social and psychological function even when we cannot tangibly impact policy. Everyone wants to be right, but practically speaking it’s usually more important for us to be socially accepted and internally consistent in our beliefs.

Thus, the accuracy of our views often doesn’t matter as much as our ability to signal our moral and political positions to others. Like the participants in this study, we have comparatively little external motivation to be correct. These separate motivations rarely conflict in everyday life, but when they do we often favor protecting our established identities and assumptions over acknowledging difficult truths.

Realigning Our Motivations

With this in mind, researchers have found that affirming individuals in non-political domains can increase participants’ acceptance of identity-inconsistent political information (Cohen, Aronson & Steele, 2000; Cohen & Sherman, 2014). Individuals who reflect on their personal values in an apolitical exercise process political information in a more balanced way, focusing on the strength of alternative arguments in a less biased manner (Correll, Spencer, & Zanna, 2004).

Preliminary evidence also suggests that monetary incentives can increase the accuracy of partisans' responses to uncongenial findings (Khanna & Sood, working paper). The individual rationality of incorrect beliefs leads to political dysfunction at the collective level, but finding ways to incentivize interpretive accuracy might lead to institutional initiatives to reduce bias.

These promising findings, however, do not yield any obvious, practical policy remedies to our pervasive political biases. Citizens, social scientists, and policymakers will need to collaborate in order to test and implement solutions to get beyond the current state of political gridlock.

Nevertheless, it is important for citizens to recognize that our political affiliations influence – not merely represent – our political behaviors. Until we find systematic ways to improve our reasoning, we can all take steps towards improving political discourse by making efforts to challenge our personal beliefs and biases. Our values are more meaningful than their political utility, so if you stumble into a heated debate remember that the search for community, happiness, and truth transcends partisan boundaries.

Cognitive Computing: Software That Augments Human Thinking

Using technology to make decisions

The IBM computer Deep Blue’s 1997 defeat of world champion Garry Kasparov is one of the most famous events in chess history. But Kasparov himself and some computer scientists believe a more significant result occurred in 2005—and that it should guide how we use technology to make decisions and get work done.

In an unusual online tournament, two U.S. amateurs armed with three PCs snatched a $20,000 prize from a field of supercomputers and grandmasters. The victors’ technology and chess skills were plainly inferior. But they had devised a way of working that created a greater combined intelligence—one in which humans provided insight and intuition, and computers brute-force predictions.

Combining the best of humans and technology

Some companies are now designing software to foster just such man-machine combinations. One that owes its success to this approach is Palantir, a rapidly growing software company in Palo Alto, California, known for its close connections to intelligence agencies. Shyam Sankar, director of forward deployed engineering at the company, says Palantir’s founders became devotees while at PayPal, where they designed an automated system to flag fraudulent transactions. “It catches 80 percent of the fraud, the dumb fraud, but it’s not clever enough for the most sophisticated criminals,” says Sankar.

PayPal ended up creating software to enable humans to hunt for that toughest 20 percent themselves, in the form of a suite of analysis tools that allowed them to act on their own insights about suspicious activity in vast piles of data rather than wait for automated systems to discover it. Palantir, which received funding from the CIA, now sells similar data-analysis software to law enforcement, banks, and other industries.

Sankar describes Palantir’s goal as fostering “human-computer symbiosis,” a term adapted from J.C.R. Licklider, a psychologist and computer scientist who published a prescient essay on the topic in 1960. Sankar contrasts that with what he calls the “AI bias” now dominant in the tech industry. “We focus on helping humans investigate hypotheses,” says Sankar. That’s only possible if analysts have tools that let them creatively examine data from every angle in search of those “aha” moments.

Concrete uses of this software

In practice, Palantir's software gives the user tools to explore interconnected data and tries to present the information visually, often as maps that track how people think. One bank bought the software in order to detect rogue employees stealing or leaking sensitive information. The detective work was guided by when and where employees badged into buildings, and by records of their digital activities on the company's network. "This is contrary to automated decision making, when an algorithm figures everything out based on past data," says Ari Gesher, a Palantir engineer. "That works great. Except when the adversary is changing. And many classes of modern problems do have this adaptive adversary in the mix."

Palantir’s devotion to human–computer symbiosis seems to be working. The nine-year-old company now has 1,200 employees and is expanding into new industries such as health care. Forbes estimated that it was on course for revenues of $450 million in 2013.

Zachary Lemnios, director of research strategy for IBM, is another Licklider fan. He says that Licklider’s ideas helped shape IBM’s effort in “cognitive computing,” a project that includes virtual assistant software and chips designed to operate like brains. “You will have an entirely different relationship with these machines,” says Lemnios. He says it’s the most important change to human–computer interaction since the graphical user interface was developed 25 years ago.

Sankar also thinks that Palantir’s success shows that large companies are ready to embrace human-computer symbiosis now because of the way people have struck up symbiotic relationships with smartphones in their personal lives. “The consumer experience has recalibrated enterprise at large; they’re on the hunt for something that replicates it,” he says.

This article originally appeared in [https://www.technologyreview.com/s/523666/software-that-augments-human-thinking/?utm_campaign=internal&utm_medium=readnext&utm_source=item_5] and belongs to the creators.

Teaching Self-Driving Cars To Make Ethical Decisions

A philosopher is perhaps the last person you’d expect to have a hand in designing your next car, but that’s exactly what one expert on self-driving vehicles has in mind.

Chris Gerdes, a professor at Stanford University, leads a research lab that is experimenting with sophisticated hardware and software for automated driving. But together with Patrick Lin, a professor of philosophy at Cal Poly, he is also exploring the ethical dilemmas that may arise when self-driving vehicles are deployed in the real world.

Ethical dilemmas arise from self-driving cars being used in the real world

Gerdes and Lin organized a workshop at Stanford earlier this year that brought together philosophers and engineers to discuss the issue. They implemented different ethical settings in the software that controls automated vehicles and then tested the code in simulations and even in real vehicles. Such settings might, for example, tell a car to prioritize avoiding humans over avoiding parked vehicles, or not to swerve for squirrels.

Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. Over the next couple of years, a number of carmakers plan to release vehicles capable of steering, accelerating, and braking for themselves on highways for extended periods. Some cars already feature sensors that can detect pedestrians or cyclists, and warn drivers if it seems they might hit someone.

Looking at the past, present, and future

So far, self-driving cars have been involved in very few accidents. Google’s automated cars have covered nearly a million miles of road with just a few rear-enders, and these vehicles typically deal with uncertain situations by simply stopping (see “Google’s Self-Driving Car Chief Defends Safety Record”).

As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.

Self-driving cars and (a variation of) the trolley problem

At a recent industry event, Gerdes gave an example of one such scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.

“As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility?”

Gerdes pointed out that it might even be ethically preferable to put the passengers of the self-driving car at risk. “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day,” he said.

Gerdes called on researchers, automotive engineers, and automotive executives at the event to prepare to consider the ethical implications of the technology they are developing. “You’re not going to just go and get the ethics module, and plug it into your self-driving car,” he said.

Other experts agree that there will be an important ethical dimension to the development of automated driving technology.

"When you ask a car to make a decision, you have an ethical dilemma," says Adriano Alessandrini, a researcher working on automated vehicles at the University of Rome La Sapienza in Italy. "You might see something in your path, and you decide to change lanes, and as you do, something else is in that lane. So this is an ethical dilemma."

Alessandrini leads a project called CityMobil2, which is testing automated transit vehicles in various Italian cities. These vehicles are far simpler than the cars being developed by Google and many carmakers; they simply follow a route and brake if something gets in the way. Alessandrini believes this may make the technology easier to launch. “We don’t have this [ethical] problem,” he says.

Self-driving cars: problem or solution?

Others believe the situation is a little more complicated. For example, Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, says plenty of ethical decisions are already made in automotive engineering. "Ethics, philosophy, law: all of these assumptions underpin so many decisions," he says. "If you look at airbags, for example, inherent in that technology is the assumption that you're going to save a lot of lives, and only kill a few."

Walker-Smith adds that, given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”

This article originally appeared in [https://www.technologyreview.com/s/539731/how-to-help-self-driving-cars-make-ethical-decisions/] and belongs to the creators.

Does Emotion Affect Our Ability To Make Rational Decisions?

Decision making is one of the most common activities we perform on a daily basis, from something as small as rolling our eyes to resolving complicated ethical dilemmas. It has also been one of the main focuses of traditional economics. Economists have spent decades trying to understand how humans make decisions. The traditional thinking in the field suggests that decisions are made rationally and optimally, so, in theory, outcomes should be predictable.

However, if that is the case, how can traditional economic theory on decision making explain confusion, indecisiveness and impulsivity?

With the emergence of behavioral economics, more people have begun to understand the significance of bounded rationality in decision making. Our brains do not function like computers; instead, our decisions can be affected by our emotions and cognitive limitations. In fact, neuroscientists have been seeking to understand the neural and behavioral basis of decision making for a very long time.

Studies have shown that when making decisions, the brain takes a variety of information into consideration including fairness, losses and gains, reward values, risks, etc.

Four levels of decision-making

There are many types of decisions for us to make, and different brain regions are involved in the process depending on the decision types. Hu and Wang (2012) categorized decisions into four levels according to their complexity.

  1. Intuitive – Representing the most basic level of decisions, these are the decisions we make using familiarity, existing preferences, and common sense.
  2. Empirical – These are the types of decisions we make based on trial and error, experience and estimation.
  3. Heuristic – These are decisions we make based on scientific theories, rules of thumb, and beliefs.  
  4. Rational – The most complex level of decision making, this can be split into static and dynamic rational decisions. These are decisions we make based on minimizing cost while maximizing benefit.

Emotion and level two decision-making

Some of the most important decisions we make are second-level decisions. These decisions are usually related to consumption and require us to evaluate the values or expected values of the available options. Neuroimaging research shows that the orbitofrontal cortex and the ventromedial prefrontal cortex (O/VMPFC) are involved in this kind of decision making. The O/VMPFC is known to be responsible for reward and enjoyment in the brains of humans and other primates. It connects to our sensory systems in two ways: one is a direct connection, and the other is an indirect pathway that passes through the amygdala. The amygdala plays a major role in processing emotions, which suggests that emotions might be involved when we make level-two decisions. Studies have shown that patients with O/VMPFC lesions show a lack of emotional expression and difficulty performing second-level decision making (Hu and Wang, 2012).

Speculative hypothesis: Can inhibiting level two decision-making lead to more rational decisions?

This question was examined with the Ultimatum Game, where fairness is a major factor contributing to decision making. The Ultimatum Game is an economic experiment in which the first player receives a sum of money and decides how to split it up. This split is then offered to the second player, who can either accept or reject the offer. If the second player rejects the offer, neither player gets any money. A study done in 2006 tested whether lesions to certain brain areas would affect a player's willingness to take an unfair but still beneficial offer. In this experiment, the split was always unfair: the first player kept more of the money than he offered to the second player. For example, if $100 was allocated to the first player, they might split it up 80/20, offering the second player $20, or ⅕ of the total amount. When offered an unfair split, player 2 often rejects the offer because the split is perceived as unfair.

The right dorsolateral prefrontal cortex (DLPFC) has been shown to be involved in decisions to reject unfair offers. The study done in 2006 suggested that participants with lesions to this part of the brain were more likely to accept the unfair but still beneficial offers. While any inference here is purely speculative, an interesting question to explore is whether damage to this part of the brain, which is involved with level-two decision-making, may lead to more rational decisions. In the Ultimatum Game, the narrowly rational decision is to accept the unfair offer, since gaining some money is always better than gaining nothing.
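As a small sketch of why accepting is the payoff-maximizing response, here is the responder's choice written out, using the hypothetical $100 pot and 80/20 split from the example above:

```python
# Ultimatum Game payoffs for the responder (player 2), using the
# hypothetical $100 pot and 80/20 split described above.

offer = 20          # dollars offered to player 2
payoff_accept = offer
payoff_reject = 0   # a rejection leaves both players with nothing

# The narrowly "rational" responder simply maximizes their own payoff.
choice = "accept" if payoff_accept > payoff_reject else "reject"
print(choice)  # accept -- yet real players frequently reject offers they see as unfair
```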

Extending this concept to primates

Not only do we as humans experience fairness; other primates behave as if they understand the concept as well. In fact, animal models allow scientists to conduct more detailed research and to learn about the actual neuronal circuits involved in decision making. A study published in 2003 demonstrated that capuchin monkeys are able to detect unequal pay and will reject the inferior reward. After this study, more neuroscientists became interested in learning how "value" is encoded in the brain. In 2005, Padoa-Schioppa and Assad published a study in which they asked monkeys to choose between two unequal food rewards by looking left or right on a screen. By recording the monkeys' eye movements, they found that some neurons in the orbitofrontal cortex (OFC) fire faster in response to a better reward, which suggests these neurons encode economic value. The OFC has also been shown to play an important role in learning from unexpected outcomes and helping us alter our behaviors when decisions go wrong (Takahashi et al., 2009).

This just scratches the surface of what scientists have learned about decision making over the years. A whole new interdisciplinary field, neuroeconomics, is emerging around these questions. It truly amazes me how one of our most common and seemingly easy daily activities could be so difficult to explain and understand. Hopefully, with the help of advanced technology, we can solve this open mystery in the near future.