The term “less-is-better effect” was coined by Christopher Hsee, a behavioral scientist at the University of Chicago. In one of his experiments on this bias, Hsee had participants imagine that they were spending a summer’s day at the beach and that there was an ice cream vendor working nearby. One group of participants was told that the vendor was serving 8 oz scoops of ice cream in 10 oz cups, while the other group was told that the scoops were 7 oz served in 5 oz cups. After being given this information, participants were asked to write down how much they would be willing to pay for a serving.
The results showed that, counterintuitively, people were willing to pay more for the smaller, overfilled cup of ice cream than they were for the larger, underfilled one. However, if people were told to imagine that there were two vendors on the beach, selling both of the above options, this effect disappeared.2
The ice cream example illustrates a couple of key points about the less-is-better effect. For one, when people decide to go for the lesser option, they do so because in context, it is somehow more appealing. Although the larger ice cream scoop is objectively a better option than the small one (assuming that you’re trying to maximize your ice cream consumption, like any reasonable person), when it’s served in a cup that’s too large, people feel like they’re getting ripped off. In contrast, even though the small scoop amounts to less ice cream, when it’s stuffed into an even smaller cup, it looks like you’ve been given a generous portion.
The second thing to underline here is that the less-is-better effect involves a kind of preference reversal: when options are presented together instead of in isolation, people no longer go for the inferior pick. This finding is a hint that we approach decisions differently depending on how many options are made available to us, and on how those options are presented. Sure enough, research has turned up a few different reasons why our cognitive approach differs across these contexts.
Our perception of the world is context-dependent
The less-is-better effect is one of many examples of how our judgment can be swayed by contextual or environmental factors. Although we tend to think of ourselves as rational decision makers, time and time again, research has shown that the way we process information is highly dependent on how it’s presented to us.
One of the most famous demonstrations of this fact, identified by the highly influential psychologists Daniel Kahneman and Amos Tversky, is known as the framing effect. In a famous study, participants were told to imagine that the US was preparing for the outbreak of a new disease that was expected to kill 600 people. They were then told to pick between two options:
- If Program A is adopted, 200 people will be saved.
- If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Given these two options, 72% picked Program A, and 28% picked Program B. However, to another group of participants, Kahneman and Tversky presented two different options:
- If Program C is adopted, 400 people will die.
- If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.1
The trick here is that these options are actually the same as the first two—Programs A and C both result in 200 survivors and 400 dead, and Programs B and D both carry the same chances of success and failure. And yet, the change in wording led to very different results: only 22% picked Program C, and 78% picked Program D.
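The equivalence of the four programs can be checked with a quick expected-value calculation. Here is a minimal sketch (the function and variable names are illustrative, not from the original study):

```python
# Expected number of survivors, out of 600, for each program.
TOTAL = 600

def expected_survivors(outcomes):
    """outcomes: list of (probability, people_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

program_a = expected_survivors([(1.0, 200)])            # 200 saved for certain
program_b = expected_survivors([(1/3, TOTAL), (2/3, 0)])  # gamble on saving all 600
program_c = expected_survivors([(1.0, TOTAL - 400)])    # "400 die" = 200 saved
program_d = expected_survivors([(1/3, TOTAL), (2/3, 0)])  # "nobody dies" with p = 1/3

print(program_a, program_b, program_c, program_d)  # all equal 200.0
```

All four programs have the same expected outcome of 200 survivors; only the wording differs.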
The different responses stem from how the choices were framed: in terms of gains (people saved) or losses (people who die). When options are framed in terms of gains, people become risk-averse, choosing Program A because it is a safer bet than Program B. However, when options are framed in terms of losses, people become risk-seeking, choosing to gamble on Program D rather than accept the guaranteed loss of Program C.
The point here is that people often respond to the same piece of information very differently, depending on how it’s put to them. This is helpful for understanding the less-is-better effect, but isn’t quite sufficient, since it doesn’t explain why this effect only happens when we evaluate our options separately. Why do people’s preferences reverse when they consider their choices altogether?
Some attributes are harder to judge
One reason for the less-is-better effect is known as the evaluability hypothesis. The crux of this theory is that we have a harder time evaluating some attributes than others,3 and when we’re trying to judge objects in isolation, our impressions are much more heavily influenced by features that are easily evaluated.
The main reason that an attribute might be difficult to evaluate is that we lack distribution information for it: we don’t know what its average value is, or what its highest and lowest possible values are. For example, if you were trying to decide whether or not to buy a specific kind of car, and the salesperson told you the model’s horsepower, that information wouldn’t be very useful unless you knew the horsepower of the average car, or how big a difference there is in horsepower between a sports car and a lower-performing vehicle.
Often, we will try to simplify the decision-making process by thinking of objects in relative terms. According to norm theory, when it’s difficult to evaluate an object in isolation, people tend to think about other objects in the same category to use as a reference point. For example, when you receive a hardcover book from a friend, the fact that it cost $35 only becomes meaningful when it’s compared to the price of other books. To judge this gift, you would probably (automatically) think of the “average” book and use that as a reference point. This lets you judge the relative position of the book within its own category, a much easier task.2 But since you’re now relying on the item’s relative standing instead of its absolute value, your judgment can be biased. For instance, even though $45 might be inexpensive for a coat, a $45 coat is still an objectively more generous gift than a $35 book.
Some features stand out more than others
Another, somewhat simpler factor that drives the less-is-better effect is salience: how much certain aspects of an object stand out. Salience is a powerful determinant of how we perceive the world, biasing our attention towards more flashy, attention-grabbing information. We also tend to rely on highly salient information to guide our decision making, more so than we do on mundane details.
In many demonstrations of the less-is-better effect, people’s impressions of the objectively superior option are often handicapped by a highly salient, negative feature. One example