Why do our preferences change depending on whether we judge our options together or separately?

The Less-is-Better Effect, explained

What is the Less-is-better Effect?

The less-is-better effect describes how people sometimes prefer the worse of two options, but only when the options are presented separately. When people consider both options together, their preferences reverse, and the less-is-better effect disappears.

Where this bias occurs

Imagine your birthday is coming up, and your friends, Andrew and Amy, each give you a present. Andrew gives you a book—a new release that’s only out in hardcover, making it a bit pricey for a book, at $35. “Wow,” you think, “What a generous gift!”

Some time later, when you see Amy, she gives you a new jacket, which you happen to know costs $45. You like it, and it fits, but you can’t help but feel like $45 is pretty cheap for a piece of outerwear. “I didn’t realize Amy was this stingy,” you think to yourself.


Individual effects

As the example above illustrates, in our daily lives, the less-is-better effect probably rears its head most often in situations that involve giving or receiving gifts, skewing our perceptions of value and generosity. But it also has implications for our lives as consumers: The less-is-better effect can affect our willingness-to-pay (WTP) for various products, making us prone to overpaying for items that are of relatively poor quality. It may also lead us to devalue items that are objectively more valuable simply because of the way they are framed, or because of minor imperfections.

Systemic effects

Marketers or vendors might try to take advantage of the less-is-better effect by providing only one option in a given category and charging more than it’s worth. The less-is-better effect can also have implications in situations where the stakes are higher than just making a purchase, such as during the hiring process.

Why it happens

The term "less-is-better effect" was coined by Christopher Hsee, a behavioral scientist at the University of Chicago. In one of his experiments on this bias, Hsee had participants imagine that they were spending a summer's day at the beach, with an ice cream vendor working nearby. One group of participants was told that the vendor was serving 8 oz scoops of ice cream in 10 oz cups, while the other group was told that the scoops were 7 oz, served in 5 oz cups. After being given this information, participants were asked to write down how much they would be willing to pay for a serving.

The results showed that, counterintuitively, people were willing to pay more for the smaller, overfilled cup of ice cream than they were for the larger, underfilled one. However, if people were told to imagine that there were two vendors on the beach, selling both of the above options, this effect disappeared.2

The ice cream example illustrates a couple of key points about the less-is-better effect. For one, when people decide to go for the lesser option, they do so because in context, it is somehow more appealing. Although the larger ice cream scoop is objectively a better option than the small one (assuming that you’re trying to maximize your ice cream consumption, like any reasonable person), when it’s served in a cup that’s too large, people feel like they’re getting ripped off. In contrast, even though the small scoop amounts to less ice cream, when it’s stuffed into an even smaller cup, it looks like you’ve been given a generous portion.

The second thing to underline here is that the less-is-better effect involves a kind of preference reversal: when options are presented together instead of in isolation, people no longer go for the inferior pick. This finding is a hint that we approach decisions differently depending on how many options are available to us, and how those options are presented. Sure enough, research has turned up a few different reasons why our cognitive approach differs across these contexts.

Our perception of the world is context-dependent

The less-is-better effect is one of many examples of how our judgment can be swayed by contextual or environmental factors. Although we tend to think of ourselves as rational decision makers, time and time again, research has shown that the way we process information is highly dependent on how it’s presented to us.

One of the most famous demonstrations of this fact comes from the highly influential behavioral economists Daniel Kahneman and Amos Tversky, who coined the term "framing." In one well-known study, participants were told to imagine that the US was preparing for the outbreak of a new disease that was expected to kill 600 people. They were then asked to pick between two options:

  • If Program A is adopted, 200 people will be saved.
  • If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Given these two options, 72% picked Program A, and 28% picked Program B. However, to another group of participants, Kahneman and Tversky presented two different options:

  • If Program C is adopted, 400 people will die.
  • If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.1

The trick here is that these options are actually the same as the first two—Programs A and C both result in 200 survivors and 400 dead, and Programs B and D both carry the same chances of success and failure. And yet, the change in wording led to very different results: only 22% picked Program C, and 78% picked Program D.

The different responses stem from how the choices were framed: in terms of gains (people saved) or losses (people who die). When options are framed as gains, people become risk-averse, choosing Program A because it is a safer bet than Program B. When the same options are framed as losses, people become risk-seeking, gambling on Program D rather than accepting the guaranteed loss of Program C.
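To see that the two framings describe identical outcomes, it helps to work through the expected values. A minimal sketch in Python (using exact fractions to avoid floating-point noise), expressing every program as its expected number of survivors out of 600:

```python
from fractions import Fraction

TOTAL = 600  # expected death toll if nothing is done

# Gain frame: expected number of people saved
program_a = Fraction(200)                              # 200 saved for certain
program_b = Fraction(1, 3) * 600 + Fraction(2, 3) * 0  # gamble on saving all 600

# Loss frame: expected deaths, converted to expected survivors
program_c = TOTAL - 400                                          # 400 die for certain
program_d = TOTAL - (Fraction(1, 3) * 0 + Fraction(2, 3) * 600)  # gamble on zero deaths

# All four programs leave an expected 200 survivors
print(program_a, program_b, program_c, program_d)  # 200 200 200 200
```

Despite the identical arithmetic, the gain frame pushed most participants toward the sure thing (A) and the loss frame toward the gamble (D).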

The point here is that people often respond very differently to the same piece of information, depending on how it's put to them. This is helpful for understanding the less-is-better effect, but it isn't quite sufficient, since it doesn't explain why the effect only happens when we evaluate our options separately. Why do people's preferences reverse when they consider their choices side by side?

Some attributes are harder to judge

One reason for the less-is-better effect is known as the evaluability hypothesis. The crux of this theory is that we have a harder time evaluating some attributes than others,3 and when we’re trying to judge objects in isolation, our impressions are much more heavily influenced by features that are easily evaluated.

The main reason that an attribute might be difficult to evaluate is that we lack distribution information for it: we don’t know what its average value is, or what its highest and lowest possible values are. For example, if you were trying to decide whether or not to buy a specific kind of car, and the salesperson told you the model’s horsepower, that information wouldn’t be very useful unless you knew the horsepower of the average car, or how big a difference there is in horsepower between a sports car and a lower-performing vehicle.

Often, we will try to simplify the decision-making process by thinking of objects in relative terms. According to norm theory, when it's difficult to evaluate an object in isolation, people tend to think about other objects in the same category to use as a reference point. For example, when you receive a hardcover book from your friend, the fact that it cost $35 only becomes meaningful when it's compared to the price of other books. In order to judge this gift, you would probably (automatically) think of the "average" book, and use that as a reference point. This lets you judge the relative position of the book within its own category, a much easier task.2 But since you're now relying on the item's relative standing instead of its absolute value, this can lead to a biased judgment. For instance, even though $45 might be inexpensive for a jacket, it's still an objectively more generous gift than a $35 book.

Some features stand out more than others

Another, somewhat simpler factor that drives the less-is-better effect is salience: how much certain aspects of an object stand out. Salience is a powerful determinant of how we perceive the world, biasing our attention towards more flashy, attention-grabbing information. We also tend to rely on highly salient information to guide our decision making, more so than we do on mundane details.

In many demonstrations of the less-is-better effect, people's impressions of the objectively superior option are handicapped by a highly salient, negative feature. One example is Hsee's dinnerware study: participants reported how much they would pay for a dinnerware set, with one group shown a set of 24 pieces, all intact, and another shown a set of 40 pieces, 9 of which were broken. Evaluated separately, the smaller, intact set fetched a higher price, even though the larger set contained more usable dishes: the broken pieces were highly salient, and they dominated people's impressions of the larger set.2

Why it is important

The less-is-better effect has implications for how we should approach our decisions as consumers. As the evidence has shown, we often respond very differently to the same item depending on whether we’re encountering it on its own, or along with other options. When we have other reference points to compare an object to, we can make an informed judgment about its quality—but if it’s on its own, we’re prone to overestimating how valuable it is, and we might end up overpaying for it.

This research also shows how focusing too much on imperfections can skew our judgment when trying to choose between alternatives. In the dinnerware study, the large set may have contained some broken pieces, but it was still significantly more valuable than the smaller, mint-condition one. Our inability to look past things like this might lead us to pass up perfectly good options.

Beyond costing us money, the less-is-better effect might bias our decision making in scenarios where there’s more at stake. For example, in another study conducted by Christopher Hsee, participants evaluated two hypothetical job candidates for a computer programmer position, either separately or jointly. Candidate A had written 70 computer programs and had a college GPA of 2.5, while Candidate B had written 10 computer programs but had a GPA of 4.9.

In practice, Candidate A would probably be a better fit for the role: even though their GPA is lower, they have much more practical experience to help them do the job. When participants judged both candidates together, they understood this, and tended to pick Candidate A. But when the candidates were judged separately, the opposite pattern emerged. This is because most people don't know the distribution information for computer programming experience: without familiarity in this area, it's hard to make sense of numbers like 10 or 70 programs. Instead, people fall back on the more easily judged attribute of GPA.

These findings suggest that employers might be able to improve the quality of their decision making by assessing candidates jointly, as in a group interview, rather than separately.

How to avoid it

The best strategy for getting around the less-is-better effect is also very simple: be a comparison shopper. As we have seen, this bias only arises when we are presented with one option at a time. Whenever possible, try to find information about alternatives, and compare your options side-by-side before committing to any of them. This will prevent you from ignoring certain attributes in favor of others that are easier to evaluate, and will make for a more balanced decision.

How it all started

The term "less-is-better effect" was coined by Christopher Hsee, a behavioral scientist at the University of Chicago. Similar phenomena had been studied by other researchers previously: for example, a 1995 paper by Victoria Medvec, Scott Madey, and Thomas Gilovich found that Olympic silver medalists were, on average, less happy with their result than bronze medalists.5 This finding, and others, demonstrated that people often hold counterintuitive preferences, or seem to respond more positively to inferior options. However, the less-is-better effect is thought to rely on different mechanisms than other, similar effects, and it stands out because it involves a preference reversal when options are presented jointly instead of separately.2

Example 1 - The Linda problem

In the 1980s, the legendary behavioral economists Daniel Kahneman and Amos Tversky introduced what would come to be known as “the Linda problem,” where they had people read this personality sketch:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Next, people were given eight possible descriptions of Linda’s occupation, which they ranked in order of their probability. The final three statements were, in order: “Linda is a bank teller”; “Linda is an insurance salesperson”; and “Linda is a bank teller and is active in the feminist movement.” To the researchers’ surprise, only 15% of people correctly said that it was less likely for Linda to be a feminist bank teller than merely a bank teller. Everybody else committed what is known as the conjunction fallacy: mistakenly believing that there’s a higher chance of two events occurring together than there is of either one occurring on its own. (In reality, this is never true.) This was true even for highly educated people.
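The arithmetic behind the fallacy is simple: a conjunction can never be more probable than either of its parts, because P(A and B) = P(A) × P(B | A), and P(B | A) can be at most 1. A minimal sketch, with made-up probabilities purely for illustration:

```python
# Made-up probabilities for illustration; the inequality holds for ANY values.
p_teller = 0.05                 # P(Linda is a bank teller)
p_feminist_given_teller = 0.90  # P(feminist | bank teller): even if very high...

# ...the conjunction is still no more likely than "bank teller" alone,
# since multiplying by a probability (<= 1) can only shrink the value.
p_feminist_teller = p_teller * p_feminist_given_teller

assert p_feminist_teller <= p_teller
print(round(p_teller, 3), round(p_feminist_teller, 3))  # 0.05 0.045
```

No matter how strongly Linda's description suggests "feminist," adding that detail to "bank teller" can only lower the probability of the combined statement.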

In "increasingly desperate" attempts to elicit the right answer from people, Kahneman and Tversky tried a shortened version of the Linda problem, asking people to indicate which was more likely: that Linda was a bank teller, or a feminist bank teller. Showing the two options together didn't eliminate the error entirely, but (at least among educated groups) it sharply reduced it: in one of their studies, 64% of graduate students got the correct answer. This mirrors the less-is-better effect: when people evaluated Linda's possible occupations separately, they relied more heavily on easy-to-use characteristics to make their judgments, and did so less when the alternatives were shown together.4

Example 2 - Marketing and the less-is-better effect

This bias has implications for how marketers should approach measuring customer satisfaction. As Christopher Hsee writes in one of his papers on the less-is-better effect, in some cases, marketers might have two versions of a product that they want to test. As research has shown, they might obtain different results depending on whether they have one group of people sample both products, or if they have two groups each sample one version.2

Surprisingly, Hsee’s recommendation is to have each version evaluated separately, even though separate evaluation makes people vulnerable to the less-is-better effect. Why? Because once a product is on the market, customers won’t have the option of jointly evaluating it along with the other version that wasn’t selected. It makes more sense to go with the option that performs best in separate evaluations, because that’s how consumers will experience it too.

Summary

What it is

The less-is-better effect describes how people sometimes make suboptimal choices when they’re evaluating options in isolation. People’s preferences reverse when evaluating their choices jointly.

Why it happens

This bias happens because, when we are making choices between alternatives, our judgment is sensitive to context, as demonstrated by the framing effect. The less-is-better effect is also rooted in the evaluability hypothesis, which says that we tend to rely more on easily judged attributes when making a decision, and norm theory, which says that we tend to think about average or “prototypical” members of an object’s category in order to judge that object. This allows us to evaluate the item in terms of its relative standing within its category, rather than in absolute terms.

Example 1 – The Linda problem and the less-is-better effect

The Linda problem is a famous example from a paper by Kahneman and Tversky, which shows that even educated people are prone to the conjunction fallacy. However, people make this error less when both of the events being judged are presented simultaneously, rather than separately—just like in the less-is-better effect.

Example 2 – Marketing and the less-is-better effect

Marketers sometimes have multiple versions of a product they need to test before picking a final one. Christopher Hsee, who coined the less-is-better effect, recommends relying on separate evaluations rather than joint ones, because this is more accurate to the customer’s experience.

How to avoid it

The best, most straightforward way to avoid the less-is-better effect is to always compare options before making a decision.

Related TDL article

The Representativeness Heuristic, explained

The way that people respond to the Linda problem, which mirrors the less-is-better effect, is driven by the representativeness heuristic. This page explains this bias in-depth.

Sources

  1. Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458.
  2. Hsee, C. K. (1998). Less is better: When low-value options are valued more highly than high-value options. Journal of Behavioral Decision Making, 11(2), 107-121.
  3. Hsee, C. K. (1996). The evaluability hypothesis: An explanation for preference reversals between joint and separate evaluations of alternatives. Organizational Behavior and Human Decision Processes, 67(3), 247-257.
  4. Kahneman, D. (2011). Thinking, Fast and Slow. Macmillan.
  5. Medvec, V. H., Madey, S. F., & Gilovich, T. (1995). When less is more: Counterfactual thinking and satisfaction among Olympic medalists. Journal of Personality and Social Psychology, 69(4), 603-610.

About the Authors


Dan Pilat

Dan is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. Dan has a background in organizational decision making, with a BComm in Decision & Information Systems from McGill University. He has worked on enterprise-level behavioral architecture at TD Securities and BMO Capital Markets, where he advised management on the implementation of systems processing billions of dollars per week. Driven by an appetite for the latest in technology, Dan created a course on business intelligence and lectured at McGill University, and has applied behavioral science to topics such as augmented and virtual reality.


Dr. Sekoul Krastev

Sekoul is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. A decision scientist with a PhD in Decision Neuroscience from McGill University, Sekoul's work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.
