Using Behavioral Insights To Stay Motivated At Work

This article originally appeared in [] and belongs to the creators.

“When we think about how people work, the naïve intuition we have is that people are like rats in a maze,” says behavioral economist Dan Ariely (TED Talk: What makes us feel good about our work?). “We really have this incredibly simplistic view of why people work and what the labor market looks like.”

Instead, when you look carefully at the way people work, he says, you find out there’s a lot more at play — and at stake — than money. Ariely provides evidence that we are also driven by the meaningfulness of our work, by others’ acknowledgement — and by the amount of effort we’ve put in: the harder the task is, the prouder we are.

“When we think about labor, we usually think about motivation and payment as the same thing, but the reality is that we should probably add all kinds of things to it: meaning, creation, challenges, ownership, identity, pride, etc.,” Ariely says.

Below, take a look at some of Ariely’s other studies, as well as a few from other researchers, with interesting implications for what makes us feel good about our work.

Seeing the fruits of our labor may make us more productive

The Study: In Man’s search for meaning: The case of Legos, Ariely asked participants to build characters from Lego’s Bionicles series. In both conditions, participants were paid decreasing amounts for each subsequent Bionicle: $3 for the first one, $2.70 for the next one, and so on. But while one group’s creations were stored under the table, to be disassembled at the end of the experiment, the other group’s Bionicles were disassembled as soon as they’d been built. “This was an endless cycle of them building and we destroying in front of their eyes,” Ariely says.

The Results: The first group made 11 Bionicles, on average, while the second group made only seven before they quit.

The Upshot: Even though there wasn’t huge meaning at stake, and even though the first group knew their work would be destroyed at the end of the experiment, seeing the results of their labor for even a short time was enough to dramatically improve performance.

The less appreciated we feel our work is, the more money we want to do it

The Study: Ariely gave study participants — students at MIT — a piece of paper filled with random letters, and asked them to find pairs of identical letters. Each round, they were offered less money than the previous round. People in the first group wrote their names on their sheets and handed them to the experimenter, who looked it over and said “Uh huh” before putting it in a pile. People in the second group didn’t write down their names, and the experimenter put their sheets in a pile without looking at them. People in the third group had their work shredded immediately upon completion.

The Results: People whose work was shredded needed twice as much money as those whose work was acknowledged in order to keep doing the task. People in the second group, whose work was saved but ignored, needed almost as much money as those whose work was shredded.

The Upshot: “Ignoring the performance of people is almost as bad as shredding their effort before their eyes,” Ariely says. “The good news is that adding motivation doesn’t seem to be so difficult. The bad news is that eliminating motivation seems to be incredibly easy, and if we don’t think about it carefully, we might overdo it.”

The harder a project is, the prouder we feel of it

The Study: In another study, Ariely gave origami novices paper and instructions to build a (pretty ugly) form. Those who did the origami project, as well as bystanders, were asked at the end how much they’d pay for the product. In a second trial, Ariely hid the instructions from some participants, resulting in a harder process — and an uglier product.

The Results: In the first experiment, builders were willing to pay five times as much as those who merely evaluated the product. In the second experiment, the lack of instructions exaggerated this difference: builders valued the ugly-but-difficult products even more highly than the easier, prettier ones, while observers valued them even less.

The Upshot: Our valuation of our own work is directly tied to the effort we’ve expended. (Plus, we erroneously think that other people will ascribe the same value to our own work as we do.)

Knowing that our work helps others may increase our unconscious motivation

The Study: As described in a recent New York Times Magazine profile, psychologist Adam Grant led a study at a University of Michigan fundraising call center in which students who had benefited from the center’s scholarship fundraising efforts spoke to the callers for 10 minutes.

The Results: A month later, the callers were spending 142 percent more time on the phone than before, and revenues had increased by 171 percent, according to the Times. But the callers denied the scholarship students’ visit had impacted them.

The Upshot: “It was almost as if the good feelings had bypassed the callers’ conscious cognitive processes and gone straight to a more subconscious source of motivation,” the Times reports. “They were more driven to succeed, even if they could not pinpoint the trigger for that drive.”

The promise of helping others makes us more likely to follow rules

The Study: Grant ran another study (also described in the Times profile) in which he put up signs at a hospital’s hand-washing stations, reading either “Hand hygiene prevents you from catching diseases” or “Hand hygiene prevents patients from catching diseases.”

The Results: Doctors and nurses used 45 percent more soap or hand sanitizer in the stations with signs that mentioned patients.

The Upshot: Helping others through what’s called “prosocial behavior” motivates us.

Positive reinforcement about our abilities may increase performance

The Study: Undergraduates at Harvard University gave speeches and did mock interviews with experimenters who were either nodding and smiling or shaking their heads, furrowing their eyebrows, and crossing their arms.

The Results: The participants in the first group later answered a series of numerical questions more accurately than those in the second group.



The Upshot: Stressful situations can be manageable — it all depends on how we feel. We find ourselves in a “challenge state” when we think we can handle the task (as the first group did); when we’re in a “threat state,” on the other hand, the difficulty of the task is overwhelming, and we become discouraged. We’re more motivated and perform better in a challenge state, when we have confidence in our abilities.

Images that trigger positive emotions may actually help us focus

The Study: Researchers at Hiroshima University had university students perform a dexterity task before and after looking at pictures of either baby or adult animals.

The Results: Performance improved in both cases, but more so (10 percent improvement!) when participants looked at cute pictures of puppies and kittens.

The Upshot: The researchers suggest that the “cuteness-triggered positive emotion” helps us narrow our focus, upping our performance on a task that requires close attention. Yes, this study may just validate your baby panda obsession.

Evolution Of Decision Making (2/3): Irrationality’s Revenge

This article series tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.

This article originally appeared in [] and belongs to the creators.

Missed the first part of this series? Go back to part 1 (The Rational Revolution).

Irrationality’s Revenge

Almost as soon as von Neumann and Morgenstern outlined their theory of expected utility, economists began adopting it not just as a model of rational behavior but as a description of how people actually make decisions. “Economic man” was supposed to be a rational creature; since rationality now included assessing probabilities in a consistent way, economic man could be expected to do that, too. For those who found this a bit unrealistic, Savage and the economist Milton Friedman wrote in 1948, the proper analogy was to an expert billiards player who didn’t know the mathematical formulas governing how one ball would carom off another but “made his shots as if he knew the formulas.”

Somewhat amazingly, that’s where economists left things for more than 30 years. It wasn’t that they thought everybody made perfect probability calculations; they simply believed that in free markets, rational behavior would usually prevail.

The question of whether people actually make decisions in the ways outlined by von Neumann and Savage was thus left to the psychologists. Ward Edwards was the pioneer, learning about expected utility and Bayesian methods from his Harvard statistics professor and writing a seminal 1954 article titled “The Theory of Decision Making” for a psychology journal. This interest was not immediately embraced by his colleagues—Edwards was dismissed from his first job, at Johns Hopkins, for focusing too much on decision research. But after a stint at an Air Force personnel research center, he landed at the University of Michigan, a burgeoning center of mathematical psychology. Before long he lured Jimmie Savage to Ann Arbor and began designing experiments to measure how well people’s probability judgments followed Savage’s axioms.
A typical Edwards experiment went like this: Subjects were shown two bags of poker chips—one containing 700 red chips and 300 blue chips, and the other the opposite. Subjects took a few chips out of a random bag and then estimated the likelihood that they had the mostly blue bag or the mostly red one.

Say you got eight red chips and four blue ones. What’s the likelihood that you had the predominantly red bag? Most people gave an answer between 70% and 80%. According to Bayes’ Theorem, the likelihood is actually 97%. Still, the changes in subjects’ probability assessments were “orderly” and in the correct direction, so Edwards concluded in 1968 that people were “conservative information processors”—not perfectly rational according to the rules of decision analysis, but close enough for most purposes.
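The arithmetic behind that 97% figure is just Bayes’ theorem applied to the two bag hypotheses. A minimal sketch, treating each chip draw as independent and assuming a 50/50 prior over the bags:

```python
# Posterior probability that the sampled bag is the mostly-red one,
# given the chips drawn. Treating draws as independent is a simplifying
# assumption; the experiment's exact posterior is nearly identical.
def posterior_red(red_drawn, blue_drawn, p_red=0.7, prior=0.5):
    # Likelihood of the observed sample under each hypothesis
    lik_red_bag = (p_red ** red_drawn) * ((1 - p_red) ** blue_drawn)
    lik_blue_bag = ((1 - p_red) ** red_drawn) * (p_red ** blue_drawn)
    # Bayes' theorem: normalize over the two competing hypotheses
    return lik_red_bag * prior / (lik_red_bag * prior + lik_blue_bag * (1 - prior))

print(round(posterior_red(8, 4), 2))  # 0.97 -- far above the 70-80% most people guess
```

Each extra red chip multiplies the odds in favor of the red bag by 7/3, which is why the evidence piles up much faster than intuition suggests.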

In 1969 Daniel Kahneman, of the Hebrew University of Jerusalem, invited a colleague who had studied with Edwards at the University of Michigan, Amos Tversky, to address his graduate seminar on the practical applications of psychological research. Tversky told the class about Edwards’s experiments and conclusions. Kahneman, who had not previously focused on decision research, thought Edwards was far too generous in his assessment of people’s information-processing skills, and before long he persuaded Tversky to undertake a joint research project. Starting with a quiz administered to their fellow mathematical psychologists at a conference, the pair conducted experiment after experiment showing that people assessed probabilities and made decisions in ways systematically different from what the decision analysts advised.



“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction,” they wrote in 1973. “They rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors.”

Heuristics are rules of thumb—decision-making shortcuts. Kahneman and Tversky didn’t think relying on them was always a bad idea, but they focused their work on heuristics that led people astray. Over the years they and their adherents assembled a long list of these decision-making flaws—the availability heuristic, the endowment effect, and so on.

As an academic movement, this was brilliantly successful. Kahneman and Tversky not only attracted a legion of followers in psychology but also inspired a young economist, Richard Thaler, and with help from him and others came to have a bigger impact on the field than any outsider since von Neumann. Kahneman won an economics Nobel in 2002—Tversky had died in 1996 and thus couldn’t share the prize—and the heuristics-and-biases insights relating to money became known as behavioral economics. The search for ways in which humans violate the rules of rationality remains a rich vein of research for scholars in multiple fields.

The implications for how to make better decisions, though, are less clear. First-generation decision analysts such as Howard Raiffa and Ward Edwards recognized the flaws described by Kahneman and Tversky as real but thought the focus on them was misplaced and led to a fatalistic view of man as a “cognitive cripple.” Even some heuristics-and-biases researchers agreed. “The bias story is so captivating that it overwhelmed the heuristics story,” says Baruch Fischhoff, a former research assistant of Kahneman and Tversky who has long taught at Carnegie Mellon University. “I often cringe when my work with Amos is credited with demonstrating that human choices are irrational,” Kahneman himself wrote in Thinking, Fast and Slow. “In fact our research only showed that humans are not well described by the rational-agent model.” And so a new set of decision scholars began to examine whether those shortcuts our brains take are actually all that irrational.

When Heuristics Work

That notion wasn’t entirely new. Herbert Simon, originally a political scientist but later a sort of social scientist of all trades (the economists gave him a Nobel in 1978), had begun using the term “heuristic” in a positive sense in the 1950s. Decision makers seldom had the time or mental processing power to follow the optimization process outlined by the decision analysts, he argued, so they “satisficed” by taking shortcuts and going with the first satisfactory course of action rather than continuing to search for the best.

Simon’s “bounded rationality,” as he called it, is often depicted as a precursor to the work of Kahneman and Tversky, but it was different in intent. Whereas they showed how people departed from the rational model for making decisions, Simon disputed that the “rational” model was actually best. In the 1980s others began to join in the argument.

The most argumentative among them was and still is Gerd Gigerenzer, a German psychology professor who also did doctoral studies in statistics. In the early 1980s he spent a life-changing year at the Center for Interdisciplinary Research in the German city of Bielefeld, studying the rise of probability theory in the 17th through 19th centuries with a group of philosophers and historians. One result was a well-regarded history, The Empire of Chance, by Gigerenzer and five others (Gigerenzer’s name was listed first because in keeping with the book’s theme, the authors drew lots). Another was a growing conviction in Gigerenzer’s mind that the Bayesian approach to probability favored by the decision analysts was, although not incorrect, just one of several options.

When Gigerenzer began reading Kahneman and Tversky, he says now, he did so “with a different eye than most readers.” He was, first, dubious of some of the results. By tweaking the framing of a question, it is sometimes possible to make apparent cognitive illusions go away. Gigerenzer and several coauthors found, for example, that doctors and patients are far more likely to assess disease risks correctly when statistics are presented as natural frequencies (10 out of every 1,000) rather than as percentages.

But Gigerenzer wasn’t content to leave it at that. During an academic year at Stanford’s Center for Advanced Study in the Behavioral Sciences, in 1989–1990, he gave talks at Stanford (which had become Tversky’s academic home) and UC Berkeley (where Kahneman then taught) fiercely criticizing the heuristics-and-biases research program. His complaint was that the work of Kahneman, Tversky, and their followers documented violations of a model, Bayesian decision analysis, that was itself flawed or at best incomplete. Kahneman encouraged the debate at first, Gigerenzer says, but eventually tired of his challenger’s combative approach. The discussion was later committed to print in a series of journal articles, and after reading through the whole exchange, it’s hard not to share Kahneman’s fatigue.

Gigerenzer is not alone, though, in arguing that we shouldn’t be too quick to dismiss the heuristics, gut feelings, snap judgments, and other methods humans use to make decisions as necessarily inferior to the probability-based verdicts of the decision analysts. Even Kahneman shares this belief to some extent. He sought out a more congenial discussion partner in the psychologist and decision consultant Gary Klein. One of the stars of Malcolm Gladwell’s book Blink, Klein studies how people—firefighters, soldiers, pilots—develop expertise, and he generally sees the process as being a lot more naturalistic and impressionistic than the models of the decision analysts. He and Kahneman have together studied when going with the gut works and concluded that, in Klein’s words, “reliable intuitions need predictable situations with opportunities for learning.”

Are those really the only situations in which heuristics trump decision analysis? Gigerenzer says no, and the experience of the past few years (the global financial crisis, mainly) seems to back him up. When there’s lots of uncertainty, he argues, “you have to simplify in order to be robust. You can’t optimize any more.” In other words, when the probabilities you feed into a decision-making model are unreliable, you might be better off following a rule of thumb. One of Gigerenzer’s favorite examples of this comes from Harry Markowitz, the creator of the decision analysis cousin known as modern portfolio theory, who once let slip that in choosing the funds for his retirement account, he had simply split the money evenly among the options on offer (his allocation for each was 1/N). Subsequent research has shown that this so-called 1/N heuristic isn’t a bad approach at all.
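The 1/N rule takes a single line to state, which is part of Gigerenzer’s point. A sketch (the fund names are invented for illustration):

```python
# The 1/N heuristic: split the total investment evenly across the N options,
# ignoring expected returns, variances, and covariances entirely.
def one_over_n(total, funds):
    share = total / len(funds)
    return {fund: share for fund in funds}

print(one_over_n(9000, ["stock fund", "bond fund", "international fund"]))
# {'stock fund': 3000.0, 'bond fund': 3000.0, 'international fund': 3000.0}
```

Unlike mean-variance optimization, the rule requires no estimates of returns or covariances, which is precisely what makes it robust when those estimates are unreliable.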

Liked this? Go to part 3 (Current State).

Behavioral Economics on Fairness and Reciprocity

This post first appeared on [] in Spanish.

– Why do we need Laws?

– We need Laws in order to ensure that citizens are treated equally and fairly.

At least, that could be a short and straightforward answer to this question. But what defines what is fair, or which actions are morally right?

If you ask different people, they will give you different answers about what fairness and equality are, especially when we take into account complex and unpredictable factors such as their own character, the society they live in, or their religion.

The answer to the opening question may come from a young but very promising field that has changed standard Economics’ basic assumptions about how rational humans behave when making decisions. This field is called Behavioral Economics.

The Ultimatum Game Experiment

Behavioral Game Theory is based on experiments that examine how people react in different situations when feelings such as fairness and reciprocity come into play.

Let’s examine one of the most famous experiments in Game Theory.

In short, there are two players in this game. The first, ‘the allocator’, is endowed with a sum of money (say $10) and has to offer some of it (more than $0) to the second player, ‘the recipient’.

Now the recipient has to make a decision: accept or reject the offer. The rules are simple: if the offer is accepted, both players keep the money they won from the game; otherwise, both get nothing.

According to standard economic theory, it would be rational for the recipient to accept any offer, because any gain is better than the nothing he would get from rejecting. It would likewise be rational for the allocator to offer the smallest possible amount (say $1) to the other player.

Surprisingly (or not), numerous experiments show that neither player usually behaves that way!

Thus one of the basic assumptions of standard Economics, homo economicus (people are fully rational and act only upon their self-interest), is violated by this game.

This possibly happens due to 3 reasons:

  1. According to Herbert Gintis (2006), the allocator fears rejection of the offer, which makes her more self-protective and less profit-seeking.
  2. People prefer fairness and resist inequality. They are willing to forego a gain in order to prevent another person from receiving a superior reward. According to Ernst Fehr and Klaus M. Schmidt (1999), this phenomenon is called Inequity Aversion.
  3. According to Herbert Gintis (2009), human beings are cooperative by nature and tend to cooperate altruistically in the game, as long as the other player also acts altruistically.
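The second explanation can be made concrete. In Fehr and Schmidt’s (1999) model, a player’s utility is her own payoff minus penalties for inequality in either direction; the sketch below uses illustrative parameter values for alpha and beta, not ones taken from the paper:

```python
# Fehr-Schmidt inequity-aversion utility for a two-player split.
# alpha penalizes disadvantageous inequality (the other player gets more);
# beta penalizes advantageous inequality (you get more).
# The parameter values are illustrative, not estimates from the 1999 paper.
def fs_utility(own, other, alpha=0.5, beta=0.25):
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# A recipient offered $1 out of $10: accepting yields 1 - 0.5*(9 - 1) = -3,
# worse than the 0 from rejecting, so she rejects. An even split is fine.
print(fs_utility(1, 9))  # -3.0
print(fs_utility(5, 5))  # 5.0
```

With these (hypothetical) parameters, an inequity-averse recipient prefers the zero payoff of rejection to a lopsided $1/$9 split, which matches the experimental findings above.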

These findings are directly connected with another term of Behavioral Economics, Reciprocity. According to Ernst Fehr and Simon Gächter (2000):

“Reciprocity means that in response to friendly actions, people are frequently much nicer and much more cooperative than predicted by the self-interest model; conversely, in response to hostile actions they are frequently much more nasty and even brutal.”


Do those cooperative instincts of humans overcome other basic instincts like self-interest or feelings like anger?

Now, imagine a world guided only by (partly) flawed human behavior that shifts with each person’s self-interest. It would probably be chaos.

That is what Laws are for: to help people deviate from gut feelings rooted in their own self-interest, while encouraging the more altruistic side of human nature concerned with fairness and equality.

To sum up, irrationality isn’t a bad thing in itself. It all depends on how we behave in certain situations, based on our individual and social motives.

Are Laws finally fulfilling their purpose?

What do you think?

Don’t Worry, BE happy!

Drone Policy (2/3): Understanding The Issues

If you missed “Drone Policy (1/3): Reducing The Human Cost“, click here.

Moral Disengagement and Euphemistic Language

Preeminent psychology researcher Dr. Albert Bandura notes that the aforementioned psychological processes are far from the only forms of moral disengagement apparent in military operations. For Bandura, moral disengagement encompasses the personal, behavioral, and environmental processes by which individuals enable themselves to violate their own moral norms while still feeling OK about doing so.

The effects of depersonalization 

For example, most of us would feel guilt or remorse if we stole money out of someone’s hand. But it is a lot easier to justify pirating music online from “file sharing” sites because we can more readily convince ourselves that nobody is really being harmed by our actions. The depersonalized nature of online interactions, the abstract nature of the victim, and many other factors contribute to why the latter scenario seems more morally ambiguous than the personal and concrete nature of snatching a purse or wallet.

Agent-less vs Agentic phrases

Another mechanism of moral disengagement found in both civilian and military contexts is the use of euphemistic language. In his new book, “Moral Disengagement: How People Do Harm and Live with Themselves,” Bandura examines the literature on euphemistic language and how it is often used to “depersonalize doers from harmful activities” (2016). Research has shown that the acceptability of certain actions is influenced by what those actions are called, and using sanitized, agent-less, and technical terms instead of clear-cut, agentic, and common phrases enables us to do things we may not be comfortable with otherwise. “File-sharing” seems more justifiable than stealing in the same way that “servicing the target,” “visiting a site” and “coercive diplomacy” seem more justifiable than bombing.

The Euphemistic Language of Drone Policy

In addition to his concerns about the depersonalized and abstract nature of drone operations, Bandura worries that using agent-less jargon like “unmanned aerial vehicles” contributes to the lack of individual accountability reported in drone operations. However, the more troubling use of euphemistic language in drone operations, revealed in The Drone Papers, comes from the way targets on the ground are classified by military officials thousands of miles away:

“The documents show that the military designated people it killed in targeted strikes as EKIA — “enemy killed in action” — even if they were not the intended targets of the strike. Unless evidence posthumously emerged to prove the males killed were not terrorists or “unlawful enemy combatants,” EKIA remained their designation, according to the source. That process, he said, “is insane. But we’ve made ourselves comfortable with that. The intelligence community, JSOC, the CIA, and everybody that helps support and prop up these programs, they’re comfortable with that idea.”

Weighing up moral rightness vs operational success

From a moral and ethical standpoint, classifying potentially innocent victims as “enemies” by default is reprehensible; however, from an operational standpoint it makes perfect sense. Operators would be much more hesitant to pull the trigger if they were completely aware of how often they had killed innocent civilians during their missions. Bandura and others note the stress drone operators report despite being removed from the front lines and that “Having to turn one’s morality off and on, day in and day out, between lethal air strikes and prosocial home life makes it difficult to maintain a sense of moral integrity” (2016). Officials in the intelligence community may justify their use of these designations as a way to protect the already strained psyches of their drone teams.

Feedback Loop of Moral Disengagement in Drone Policy

An unintended consequence of these euphemisms is the implicit message that’s been conveyed down the chain of command: collateral damage isn’t a concern. Former drone operator and instructor Michael Haas claims that he was punished when he failed a student on a training mission in which the student insisted his targets were suspicious despite having no evidence to back the judgment:  



“Short on operators, his superiors asked him to explain his decision. “I don’t want a person in that seat with that mentality with their hands on those triggers,” Haas says. “It’s a dangerous scenario for everyone to have someone with that bloodlust.” But the student’s detached outlook wasn’t as important as training new recruits. Haas was ultimately punished for failing the student and barred from teaching for 10 days.”

On some level Haas’ superiors surely want to limit the number of civilians killed in their attacks, but the euphemistic language by which their policy objectives are gauged allows them to dismiss Haas’ concerns and carry on with training a potentially dangerous recruit. When collateral damage is a hidden statistic, there’s no reason to be concerned when looking at the stat sheet: sanitizing the language of war continually enables fatal mistakes to be overlooked or go unpunished.

It is problematic that individuals within drone bureaucracies morally disengage while on the job and thereby maintain the policy status quo, but the bigger problem is that the policies and systems of drone warfare internally manufacture these kinds of moral disengagement where they might not otherwise arise.

Liked this? Read part 3. 

Bridging The Divide Between Decision Science And Policy

This article originally appeared in Behavioral Policy and belongs to the creators.

There are many stories of behavioral scientists who are resourceful, entrepreneurial, determined, and idealistic, and who successfully push their ideas into policy and practice. However, the vast majority of rank-and-file scientists lack the resources, time, access, and incentives to directly influence policy decisions. Meanwhile, policymakers and practitioners are increasingly receptive to behavioral solutions but may not know how to discriminate good from bad behavioral science. A better way of bridging this divide between behavioral scientists and policymakers is urgently needed. The solution, we argue, requires behavioral scientists to rethink the way they approach policy applications of their work, and it requires a new vehicle for communicating their insights.

Rethinking the approach to decision science research

Behavioral scientists interested in having real-world impact typically begin by reflecting on consistent empirical findings across studies in their research area and then trying to generate relevant applications based on a superficial understanding of relevant policy areas. We assert that to have greater impact on policymakers and other practitioners, behavioral scientists must work harder to first learn what it is that practitioners need to know. This requires effort by behavioral scientists to study the relevant policy context—the institutional and resource constraints, key stakeholders, results of past policy initiatives, and so forth—before applying behavioral insights. In short, behavioral scientists will need to adopt a more problem-driven approach rather than merely searching for applications of their favorite theories.

This point was driven home to us by a story from David Schkade, a professor at the University of California, San Diego. In 2004, Schkade was named to a National Academy of Sciences panel that was tasked with helping to increase organ donation rates. Schkade thought immediately of the aforementioned research showing the powerful effect of defaults on organ donation consent. Thus, he saw an obvious solution to organ shortages: Switch from a regime in which donors must opt in (for example, by affirmatively indicating their preference to donate on their driver license) to one that requires people to either opt out (presume consent unless one explicitly objects) or at least make a more neutral forced choice (in which citizens must actively choose whether or not to be a donor to receive a driver’s license).

As the panel deliberated, Schkade was surprised to learn that some states had already tried changing the choice regime, without success. For instance, in 2000, Virginia passed a law requiring that people applying for driver’s licenses or identification cards indicate whether they were willing to be organ donors, using a system in which all individuals were asked to respond (the form also included an undecided category; this and a nonresponse were recorded as unwillingness to donate). The attempt backfired because of the unexpectedly high percentage of people who did not respond yes.1,2

As the expert panel discussed the issue further, Schkade learned that a much larger problem in organ donation was yield management. At the time, approximately 13,000–14,000 Americans died each year in a manner that made them medically eligible to become donors. Fifty-nine different organ procurement organizations (OPOs) across the United States had conversion rates (percentage of medically eligible individuals who became donors in their service area) ranging from 34% to 78%.1 The panel quickly realized that getting lower performing OPOs to adopt the best practices of the higher performing OPOs—getting them to, say, an average 75% conversion rate—would substantially address transplant needs for all major organs other than kidneys.

Several factors were identified as contributing to variations in conversion rates: differences in how doctors and nurses approach families of potential donors about donation (family wishes are usually honored); timely communication and coordination between the hospitals where the potential donors are treated, the OPOs, and the transplant centers; the degree of testing of the donors before organs are accepted for transplant; and the speed with which transplant surgeons and their patients decide to accept an offered organ. Such factors, it turned out, provided better opportunities for increasing the number of transplanted organs each year. Because almost all of the identified factors involve behavioral issues, they provided new opportunities for behavioral interventions. Indeed, since the publication of the resulting National Academy of Sciences report, the average OPO conversion rate increased from 57% in 2004 to 73% in 2012.3

Extrapolating decision science research to the real world

The main lesson here is that one cannot assume that even rigorously tested behavioral scientific results will work as well outside of the laboratory or in new contexts. Hidden factors in the new applied context may blunt or reverse the effects of even the most robust behavioral patterns that have been found in other contexts (in the Virginia case, perhaps the uniquely emotional and moral nature of organ donation decisions made the forced choice regime seem coercive). Thus, behavioral science applications urgently require proofs of concept through new field tests where possible. Moreover, institutional constraints and contextual factors may render a particular behavioral insight less practical or less important than previously supposed, but they may also suggest new opportunities for application of behavioral insights.


A second important reason for field tests is to calibrate scientific insights to the domain of application. For instance, Sheena Iyengar and Mark Lepper famously documented choice overload, in which too many options can be debilitating. In their study, they found that customers of an upscale grocery store were much more likely to taste a sample of jam when a display table had 24 varieties available for sampling than when it had six varieties, but the customers were nevertheless much less likely to actually make a purchase from the 24-jam set.4 Although findings such as this suggest that providing consumers with too many options can be counterproductive, increasing the number of options generally will provide consumers with a more attractive best option. The ideal number of options undoubtedly varies from context to context,5 and prior research does not yet make predictions precise enough to be useful to policymakers. Field tests can therefore help behavioral scientists establish more specific recommendations that will likely have greater traction with policymakers.
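The jam result is essentially a two-stage conversion funnel: stopping to taste, then buying. A sketch of that funnel, with illustrative rates that approximate the published study (the exact figures are not given in this article):

```python
# Illustrative funnel for the choice-overload jam study. The rates below are
# approximations of the published results, not figures from this article.
def overall_conversion(stop_rate, purchase_rate):
    """Fraction of all passersby who end up buying."""
    return stop_rate * purchase_rate

few_options = overall_conversion(stop_rate=0.40, purchase_rate=0.30)   # 6 jams
many_options = overall_conversion(stop_rate=0.60, purchase_rate=0.03)  # 24 jams

print(f"6-jam display:  {few_options:.1%} of passersby buy")   # 12.0%
print(f"24-jam display: {many_options:.1%} of passersby buy")  # 1.8%
```

The funnel framing makes the calibration problem concrete: the larger display wins the first stage (attention) but loses the second (purchase), and only field data can tell you where the crossover lies in a given context.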

Effectively communicating decision science insights

Although a vast reservoir of useful behavioral science waits to be repurposed for specific applications, the kind of research required to accomplish this goal is typically not valued by high-profile academic journals. Most behavioral scientists working in universities and research institutes are under pressure to publish in top disciplinary journals that tend to require significant theoretical or methodological advances, often requiring authors to provide ample evidence of underlying causes of behavior. Many of these publications do not reward field research of naturally occurring behavior,5 encourage no more than a perfunctory focus on practical implications of research, and usually serve a single behavioral discipline. There is therefore an urgent need for new high-profile outlets that publish thoughtful and rigorous applications of a wide range of behavioral sciences—and especially field tests of behavioral principles—to increase the supply of behavioral insights that are ready to be acted on.

On the demand side, although policymakers increasingly are open to rigorous and actionable behavioral insights, they do not see much research in a form that they can use. Traditional scientific journals that publish policy-relevant work tend to be written for experts, with all the technical details, jargon, and lengthy descriptions that experts expect but busy policymakers and practitioners cannot decipher easily. In addition, this work often comes across as naive to people creating and administering policy. Thus, new publications are needed that not only guarantee the disciplinary and methodological rigor of research but also deliver reality checks for scientists by incorporating policy professionals into the review process. Moreover, articles should be written in a clear and compelling way that is accessible to nonexpert readers. Only then will a large number of practitioners be interested in applying this work.

Evolution Of Decision Making (3/3): Current State

This article series tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.

This article originally appeared in [] and belongs to the creators.

Missed either of the first two parts? Go back to Part 1 (The Rational Revolution), or Part 2 (Irrationality’s Revenge).

The State of the Art

The Kahneman-Tversky heuristics-and-biases approach has the upper hand right now, both in academia and in the public mind. Aside from its many real virtues, it is the approach best suited to obtaining interesting new experimental results, which are extremely helpful to young professors trying to get tenure. Plus, journalists love writing about it.

Decision analysis hasn’t gone away, however. HBS dropped it as a required course in 1997, but that was in part because many students were already familiar with such core techniques as the decision tree. As a subject of advanced academic research, though, it is confined to a few universities—USC, Duke, Texas A&M, and Stanford, where Ron Howard teaches. It is concentrated in industries, such as oil and gas and pharmaceuticals, in which managers have to make big decisions with long investment horizons and somewhat reliable data. Chevron is almost certainly the most enthusiastic adherent, with 250 decision analysts on staff. Aspects of the field have also enjoyed an informal renaissance among computer scientists and others of a quantitative bent. The presidential election forecasts that made Nate Silver famous were a straightforward application of Bayesian methods.

Those who argue that rational, optimizing decision making shouldn’t be the ideal are a more scattered lot. Gigerenzer has a big group of researchers at the Max Planck Institute for Human Development, in Berlin. Klein and his allies, chiefly in industry and government rather than academia, gather regularly for Naturalistic Decision Making conferences. Academic decision scholars who aren’t decision analysts mostly belong to the interdisciplinary Society for Judgment and Decision Making, which is dominated by heuristics-and-biases researchers. “It’s still very much us and them, where us is Kahneman-and-Tversky disciples and the rest is Gerd and people who have worked with him,” says Dan Goldstein, a former Gigerenzer student now at Microsoft Research. “It’s still 90 to 10 Kahneman and Tversky.” Then again, Goldstein—a far more diplomatic sort than his mentor—is slated to be the next president of the society.
There seems to be more overlap in practical decision advice than in decision research. The leading business school textbook, Judgment in Managerial Decision Making, by Harvard’s Max Bazerman (and, in later editions, UC Berkeley’s Don Moore), devotes most of its pages to heuristics and biases but is dedicated to the decision analyst Howard Raiffa and concludes with a list of recommendations that begins, “1. Use decision analysis tools.” There’s nothing inconsistent there—the starting point of the whole Kahneman-and-Tversky research project was that decision analysis was the best approach. But other researchers in this tradition, when they try to correct the decision-making errors people make, also find themselves turning to heuristics.

One of the best-known products of heuristics-and-biases research, Richard Thaler and Shlomo Benartzi’s Save More Tomorrow program, replaces the difficult choices workers face when asked how much they want to put aside for retirement with a heuristic—a commitment to automatically bump up one’s contribution with every pay raise—that has led to dramatic increases in saving. A recent field experiment with small-business owners in the Dominican Republic found that teaching them the simple heuristic of keeping separate purses for business and personal life, and moving money from one to the other only once a month, had a much greater impact than conventional financial education. “The big challenge is to know the realm of applications where these heuristics are useful, and where they are useless or even harm people,” says the MIT economist Antoinette Schoar, one of the researchers. “At least from what I’ve seen, we don’t know very well what the boundaries are of where heuristics work.”
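The Save More Tomorrow mechanism is simple enough to sketch. The starting rate, bump size, and cap below are hypothetical parameters for illustration, not the program's actual figures:

```python
# Minimal sketch of the Save More Tomorrow escalation heuristic: rather than
# asking for a large contribution now, the employee pre-commits to raising the
# contribution rate with each future pay raise. Parameters are hypothetical.
def escalate_contributions(start_pct, bump_pct, cap_pct, num_raises):
    """Contribution rate (in % of salary) after each successive pay raise."""
    rates = [start_pct]
    for _ in range(num_raises):
        rates.append(min(rates[-1] + bump_pct, cap_pct))
    return rates

# An employee starting at 3%, bumping 3 points per raise, capped at 15%:
print(escalate_contributions(3, 3, 15, 5))  # [3, 6, 9, 12, 15, 15]
```

The design choice worth noting is that each increase coincides with a raise, so take-home pay never falls, which sidesteps the loss aversion that makes immediate contribution hikes so unpopular.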

This has recently been a major research project for Gigerenzer and his allies—he calls it the study of “ecological rationality.” In environments where uncertainty is high, the number of potential alternatives is large, or the sample size is small, the group argues, heuristics are likely to outperform more-analytic decision-making approaches. This taxonomy may not catch on—but the sense that smart decision making consists of a mix of rational models, error avoidance, and heuristics seems to be growing.

Other important developments are emerging. Advances in neuroscience could change the decision equation as scientists get a better sense of how the brain makes choices, although that research is in early days. Decisions are increasingly shunted from people to computers, which aren’t subject to the same information-processing limits or biases humans face. But the pioneers of artificial intelligence included both John von Neumann and Herbert Simon, and the field still mixes the former’s decision-analysis tools with the latter’s heuristics. It offers no definitive verdict—yet—on which approach is best.

Making Better Decisions

So, what is the right way to think about making decisions? There are a few easy answers. For big, expensive projects for which reasonably reliable data is available—deciding whether to build an oil refinery, or whether to go to an expensive graduate school, or whether to undergo a medical procedure—the techniques of decision analysis are invaluable. They are also useful in negotiations and group decisions. Those who have used decision analysis for years say they find themselves putting it to work even for fast judgments. The Harvard economist Richard Zeckhauser runs a quick decision tree in his head before deciding how much money to put in a parking meter in Harvard Square. “It sometimes annoys people,” he admits, “but you get good at doing this.”

A firefighter running into a burning building doesn’t have time for even a quick decision tree, yet if he is experienced enough his intuition will often lead him to excellent decisions. Many other fields are similarly conducive to intuition built through years of practice—a minimum of 10,000 hours of deliberate practice to develop true expertise, the psychologist K. Anders Ericsson famously estimated. The fields where this rule best applies tend to be stable. The behavior of tennis balls or violins or even fire won’t suddenly change and render experience invalid.

Management isn’t really one of those fields. It’s a mix of situations that repeat themselves, in which experience-based intuitions are invaluable, and new situations, in which such intuitions are worthless. It involves projects whose risks and potential returns lend themselves to calculations but also includes groundbreaking endeavors for which calculations are likely to mislead. It is perhaps the profession most in need of multiple decision strategies.


Part of the appeal of heuristics-and-biases research is that even if it doesn’t tell you what decision to make, it at least warns you away from ways of thought that are obviously wrong. If being aware of the endowment effect makes you less likely to defend a declining business line rather than invest in a new one, you’ll probably be better off.

Yet overconfidence in one’s judgment or odds of success—near the top of most lists of decision-making flaws—is a trait of many successful leaders. At the very cutting edge of business, it may be that good decision making looks a little like the dynamic between Star Trek’s Captain Kirk and Mr. Spock, with Spock reciting the preposterously long odds of success and Kirk confidently barging ahead, Spock still at his side.

Signs That The Government Is Embracing Behavioral Science

This article originally appeared in and belongs to the creators.

Obama’s Executive Order

For anyone interested in human behavior and decision making, September 15 will likely be a day to remember. On that day, President Obama ordered government agencies to use behavioral science insights to “better serve the American people.” In his executive order, Obama instructed federal agencies to identify policies and operations where applying findings from behavioral science could improve “public welfare, program outcomes, and program cost effectiveness,” design strategies for using behavioral science insights, and recruit behavioral experts whenever considered necessary or helpful. (Here is the full report by the White House Social and Behavioral Science Team, which discusses some of the work that has been already conducted using behavioral insights.)

This order reflects the evidence that scholars across a variety of fields — from behavioral economics to psychology to behavioral decision research — have accumulated in recent years that people often fail to make rational choices. Across a wide range of contexts, we often make foolish decisions that go against our self-interest. We exercise too little and eat too much. We spend too much, don’t save enough, and wind up heavily in debt.

Such deviations from rationality, well documented in the decision-making literature, are consistent across time and populations. For example, the typical person would dislike losing $50 more than he would enjoy gaining $50, which would not be the case if he were fully rational. And when making decisions, people tend to give disproportionate weight to information that readily comes to mind (a recent discussion, for example) and overlook more pertinent information that is harder to retrieve from memory. Again, this shouldn’t happen to so-called “rational agents.”
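The asymmetry in the $50 example is usually modeled with the prospect-theory value function. Here is a sketch using the standard textbook parameter estimates from Tversky and Kahneman (1992); the specific parameter values are illustrative and not taken from this article:

```python
# Sketch of the prospect-theory value function behind the $50 example above,
# using the commonly cited Tversky & Kahneman (1992) parameter estimates.
ALPHA = 0.88   # diminishing sensitivity to larger amounts
LAMBDA = 2.25  # loss-aversion coefficient: losses loom larger than gains

def subjective_value(x):
    """Felt value of a gain (x > 0) or loss (x < 0) relative to the status quo."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

gain = subjective_value(50)
loss = subjective_value(-50)
print(f"Gaining $50 feels like +{gain:.1f}; losing $50 feels like {loss:.1f}")
```

Under these parameters, losing $50 feels roughly 2.25 times as intense as gaining $50, which is exactly the asymmetry a fully rational agent would not exhibit.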

The traditional assumption of rationality in public policy

Public policy has often relied on assumptions of rationality when accounting for human behavior, which has led to suboptimal policies in the past. For example, citizens are sometimes bombarded by mass-media campaigns (designed to decrease smoking, increase seat-belt use, etc.) that assume they will be able to process an onslaught of messages to their best advantage. But such campaigns often have not worked, and may even have backfired at times.

A changing landscape

Over the last decade or so, insights from behavioral science have been applied to public policy issues such as tax payments, medical decisions, consumer health and wellness, and climate-change mitigation. Consider work conducted by the Behavioural Insights Team (BIT), an organization set up in the United Kingdom to apply “nudges” to improve government policy and services. (A nudge, a term introduced by Richard Thaler and Cass Sunstein in their 2008 book Nudge: Improving Decisions About Health, Wealth, and Happiness, is any aspect of a process that changes how people behave in predictable ways “without forbidding any options or significantly changing their economic incentives.”)

Behavioral science in action

For example, in one study, the BIT partnered with the U.K. Driver and Vehicle Licensing Agency to change the wording of the letter sent to people who were delinquent in paying their vehicle taxes. Departing from the complex legal language of the existing letter, the new letter in effect told people to “pay your tax or lose your car.” To make the demand more personal, some of the letters also included a photo of the car in question. The rewritten letter increased the number of people paying the tax; the rewrite with the photo changed behavior even more dramatically.
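The letter rewrite is, in effect, an A/B test, and checking whether such a lift is statistically meaningful takes only a few lines. The counts below are entirely hypothetical, since the article reports no figures:

```python
# Sketch of a two-proportion z-test, the standard check for an A/B test like
# the DVLA letter experiment. All counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic and two-sided p-value for a difference in proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 400 of 1,000 paid with the old letter, 500 of 1,000 with the rewrite.
z, p = two_proportion_z(400, 1_000, 500, 1_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a lift of this (invented) size, the difference is far outside what chance would produce, which is the kind of evidence field tests of nudges are designed to deliver.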

Another successful nudge (not involving BIT) involved sending letters to residential users of high amounts of energy in San Marcos, Calif. To influence them to consume less energy, the letters told them how their consumption compared with that of their neighbors. Finding out that they were consuming more than others like them triggered strong negative emotions that in turn led to behavioral changes and a 10% reduction in energy consumption.

Nudges like these speak to the power of developing interventions and policies that consider people for what they are: creatures whose information-processing capacity and emotions limit them from being rational agents. Well-designed behavioral studies can offer policymakers useful insights into human behavior that can improve policies. Such studies are applicable to a wide range of policy areas, wherever human behavior plays a role.

Behavioral science at the organizational level

Similarly, organizations can identify more effective management practices through a better understanding of human behavior. The implications could be wide ranging, from helping employees adopt healthier habits to increasing their happiness and productivity at work.

In its cafeterias, Google has experimented with this idea to encourage employees to adopt healthier eating habits. When “Googlers” reach for a plate, they encounter a sign informing them that people with bigger plates are inclined to eat more than those with smaller plates. Thanks to this simple change, the proportion of people using small plates has increased by 50%.

Or consider how simple interventions can increase employee happiness and productivity. Lalin Anik, Lara Aknin, Michael Norton, Elizabeth Dunn, and Jordi Quoidbach conducted a series of field experiments in which they found that when employees share their bonuses with coworkers and charities, they are more satisfied and perform at a higher level than those who don’t. Giving employees opportunities to spend money on others increases happiness, job satisfaction, and team performance, their research discovered.


As another example, a few years ago, my colleagues and I conducted a study in collaboration with a major U.S. car insurance company. We sent 13,488 of the company’s customers a form that asked them to report the number of miles they had driven the prior year, as indicated on their cars’ odometers. Cheating by under-reporting mileage would come with the financial benefit of lower insurance premiums. On about half of the forms sent out, customers were supposed to sign to indicate their truthfulness at the bottom of the page. The other half of the forms asked the customers to sign at the top of the page. The average mileage reported by customers who signed the form at the top was more than 2,400 miles higher than that reported by those who signed at the bottom. The simple change put customers in a more honest mindset.

Behavioral science can help managers design new practices, suggest improvements to existing ones, or provide ex-post explanations of why people reacted in a particular way. In short, using insights from behavioral science can have profound benefits across government and business, and more are being implemented every day. So, are you ready for a nudge?

How To Create Lasting Change

This article originally appeared in Triple Pundit and belongs to the creators.

While change is hard, Chip Heath pointed out that people undergo major life changes all the time, and even like it (his examples were marriage and having kids).

Human decision making is like a tiny rider on a massive elephant.  The rider may think he’s in charge, but the elephant’s will always wins. Both are imperfect – the rider over-thinks and over-analyzes.  The elephant acts on passion and emotion. Heath’s advice for causing change was three-pronged:

  1. Direct the rider
  2. Motivate the elephant
  3. Shape the path

1) Direct the rider:

Humans obsess about problems to a fault and spend very little time analyzing what’s right, say, in a relationship.  Heath explained how focusing on bright spots rather than issues can be transformational.  Let’s study what’s working and do more of that.  He gave an example of Donald Berwick at the Institute for Healthcare Improvement who aimed to save 100,000 lives by a certain date, and exceeded his goal simply by looking at what medical practices worked and spreading them across healthcare facilities.

2) Motivate the elephant:

People are emotional and often react better to a good story than heaps of data.  Tell a story and allow your listeners to draw their own conclusions (which ideally match up with yours).  In a vivid example, Heath described a procurement officer who wanted to overhaul his company’s supply chain for greater efficiency.  Rather than say that, or bombard his team with data on the problem, he chose one item — gloves worn by the manufacturing team — and noticed that the company purchased 424 kinds of gloves.  He got one of each and placed them in a mound on the conference table and then invited his team in.  Without saying a word, they began to proclaim “This is crazy! We can fix this so easily!” — which was exactly what the procurement officer wanted to do.  He invited his colleagues to see, feel, and then change the problem.

Interestingly, Heath pointed out that the environmental movement has got us all saying, “This is crazy!” but no one is quite at the point of saying, “And we can fix it!” And that’s a problem.


3) Shape the path:

Make change easy.  Manipulate the situation and the environment such that the desired behavior is frictionless.  Amazon’s 1-Click purchase button is a great example of removing all barriers between the customer and the goal.  If you are trying to drive change, have you removed every single barrier between the people who aim to change and the actions you want them to take?  “What looks like a people problem is often a situation problem,” Heath explained.  The clearer your ask, the higher the likelihood that people will comply.  Giving students a map and specific directions about donating a can of food increased their likelihood of compliance from 8% to 42% among the kindest students, and from 0% to 25% among the least kind students.

So to recap:

  • Direct the rider – study the bright spots and replicate
  • Motivate the elephant – use emotional levers
  • Shape the path – make change easy

How does this apply to your work? How can you enhance the chance that people will change their behaviors using these simple guidelines?

3 Applications Of Behavioral Economics In The Real World

As a ‘newbie’ economist with a broad theoretical background, a thirst for knowledge, and an impatience to make an impact on the world, I quickly made Behavioral Economics my main interest and passion.

The problem is that most professionals, especially supervisors and even HR specialists, cannot really see how Behavioral Economics applies to their work routine. At times, even I find myself caught in the same pattern of thought.

Every company is looking for ways to reduce the cost and time it spends relative to the profit it makes. To achieve this, companies need insights, and they use market research and data analysis as their main tools.

A Behavioral Economist is useful in both areas. That’s the primary reason why my main mission is to figure out the application of Behavioral Economics in the commercial world.

Let’s dive into three useful applications:

1) Small Data Is The New Big Data

Research by ‘Nextstage Evolution’, as noted on their Facebook and LinkedIn profiles, concluded:

Companies are realizing ‘big data’ isn’t as useful as they were told, and that smaller, precise data sets answer questions quicker and cheaper.

In that case, a Behavioral Economist can help companies reduce the cost and time they spend on ‘big data’. Through research, they can identify which variables yield these precise ‘small data’ sets. This ability to separate data draws on a Behavioral Economics tool: conceptual models.

Behavioral scientist Alain Samson, editor of ‘The Behavioral Economics Guide 2015’, suggests in his guide that these models are used to identify consumer groups, classified by their needs and wants.

The most important factor, in that case, is human psychology.

In order to classify those groups, Behavioral Economists analyze descriptive characteristics, such as gender, income, age and education, and behavioral dimensions, such as benefits, usage rates and loyalty status.
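As a toy illustration of that kind of classification, here is a minimal rule-based segmentation over behavioral dimensions. The records, thresholds, and segment names are all invented for the example; a real analysis would use clustering or survey data:

```python
# Toy segmentation of consumers on descriptive and behavioral dimensions.
# Every record, threshold, and segment label here is hypothetical.
consumers = [
    {"age": 24, "income": 28_000, "usage_rate": 0.9, "loyal": False},
    {"age": 47, "income": 95_000, "usage_rate": 0.2, "loyal": True},
    {"age": 35, "income": 60_000, "usage_rate": 0.7, "loyal": True},
]

def segment(c):
    """Assign a (hypothetical) segment, checking behavioral dimensions first."""
    if c["loyal"] and c["usage_rate"] >= 0.5:
        return "core advocates"
    if c["usage_rate"] >= 0.5:
        return "heavy but uncommitted"
    return "occasional users"

for c in consumers:
    print(segment(c), c)
```

Descriptive characteristics (age, income) are carried along in each record but, in this sketch, the behavioral dimensions (usage rate, loyalty) do the classifying, mirroring the ordering described above.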

Behavioral Economists can also effectively apply their analysis and understanding of consumer behavior.

This includes the consumer’s behavior when face-to-face with the seller, and the consumer’s pre-purchase behavior, based on the information collected up to that point. More emphasis could also be given to the post-purchase outcomes and reactions of existing consumers, in order to evoke positive feelings about the product’s performance that lead to additional word of mouth and loyalty to the company’s brand.

2) Social Norms And Herd Instincts

The term ‘nudge’ was coined and popularized in the 2008 book, ‘Nudge: Improving Decisions about Health, Wealth and Happiness’, written by American academics Richard H. Thaler and Cass R. Sunstein.

According to the authors (2008, page 6):

A nudge […] is any aspect of the choice architecture that alters people’s behavior in a predictable way, without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not.

Economists and psychologists still argue over the application of this theory. However, its essence can be practically used by Behavioral Economists, especially when it comes to behavioral science terms, such as ‘social norms’ or ‘social norming’.

According to Alan D. Berkowitz PhD, ‘The Social Norms Approach: Theory, Research and Annotated Bibliography’ (2004, page 5):

The social norms approach states that our behavior is influenced by incorrect perceptions of how other members of our social groups think and act.

According to Thaler (2008, p.182) in real life:

People like to do what most people think is right to do; people like to do what most people actually do.

Taking the aforementioned insights into account, here is an example of how Behavioral Economists and marketers could make a point by applying ‘social norming’: “9 Out Of 10 People Pay Their Taxes On Time”.


3) Big Risks – Big Wins

Behavioral Economists with a PhD have the ability to conduct behavioral and decision-making research in their labs. Although the lab is a great research tool, it has one main disadvantage (Alain Samson, 2015, p.15): a lack of external validity, since findings may not hold outside the lab.

Research and tests in real-world scenarios are often used to tackle this problem, although they tend to be more expensive than those in a lab. Economics and investments involve a lot of inherent risk; running expensive, time-consuming, out-of-the-lab experiments might be an additional risk a company has to take in order to improve its performance and outmatch the competition.

According to the Indian Institute of Technology Kharagpur (Diploma in Applied Psychology – Consumer Behavior, ALISON – Free online learning), the understanding of consumer behavior, and thus Behavioral Economics, is an interdisciplinary science, because it draws on the fields of Economics, Psychology, Sociology, Social Psychology and Anthropology.

Therefore, Behavioral Economists could work as full-stack analysts, dealing with a wide variety of projects, depending on the company’s needs. Consequently, companies could potentially save in terms of both human resources and money.