
TDL Brief: What’s next for polling?

Nov 09, 2020

A cartoon illustrating polling bias. On the left, a group of stick figures represents reality, with some having pizza-shaped heads (representing people who like pizza) and others with salad-shaped heads (representing people who like salad). As the figures move right towards a sample data section, only salad-lovers are selected. A person at 'Mega Poll Genius Co' looks at the biased data and concludes, 'Wow, we need to start making more salads!' The image humorously highlights how biased sampling can lead to misleading conclusions.

On Tuesday night, Nov. 3, as election results started to roll in, it became clear that the U.S. presidential race was going to be closer than many polls and models had previously suggested.

Media outlets were quick to conclude that there had been a gigantic polling misfire, even worse than in 2016. The Washington Post published an article entitled “The polling industry can’t sweep its failure under the rug,” and The Conversation wrote that the 2020 election was “An embarrassing failure for election pollsters,” to name just two.

“To all the pollsters out there, you have no idea what you’re doing.”

— Lindsey Graham, after rather easily winning re-election to the Senate despite pre-election polls indicating a much closer race.

Pollsters and forecasters argue that it’s premature to diagnose the polls as wildly incorrect. In many states—including battleground states such as Arizona and Georgia—the polls turned out to be quite accurate. And to be fair, FiveThirtyEight’s final projections gave Biden less than a 1-in-3 chance of a landslide. Projections for the popular vote are likely to be pretty accurate. 

But there were definitely significant misses at the state and district level, particularly in Wisconsin, Michigan, Ohio, and Florida. Polling for Senate races also projected much closer contests in Iowa, Maine, and South Carolina than the ones we actually saw.

And when you look at the polling misses in both 2016 and 2020, the errors don’t seem random: the actual results tend to come out more favorable to the Republican Party than the polls suggested. That is, there are hardly any states where the polls pointed to a Republican win that were in fact won by the Democrat; the errors seem to run in one direction only. Moreover, the polls seem to miss not only in the same direction nationwide, but in the same states.

With this in mind, TDL has compiled a list of hypotheses that may shed light on what, if anything, went wrong, and where to go from here.

Tom Spiegler, TDL Managing Director

1. The “shy voter” hypothesis.

By: The Conversation, “Voters’ embarrassment and fear of social stigma messed with pollsters’ predictions” (November 2016)

This was one of the theories floated after the 2016 U.S. election, and one that was mostly dismissed by pollsters at the time. However, many are paying more attention to the “shy voter” hypothesis this time around.

The argument goes like this: People are typically proud to vote for the candidate of their choice. Under normal conditions, when citizens are approached by pollsters about their voting intentions, the assumption is they’ll respond truthfully.

However, in certain circumstances, societal pressures may cause a voter to withhold their true preference from pollsters. If the norm in a voter’s social milieu is to vote for Candidate X, a vote for Candidate Y would defy that social norm. The less anonymous the polling method is, the more social pressures and norms can lead the “shy voter” to lie about their political preferences.

There is some research to support the idea that even actions carried out in private can lead to embarrassment. For instance, buying Viagra, even in the privacy of one’s home, can lead some people to feel ashamed, if they believe that unenhanced sexual performance is the social norm.

The “shy voter” hypothesis is particularly apt for polls conducted over the phone, which is a less anonymous venue than an online survey. Because any embarrassment is likely to be more acute over the phone, these kinds of polling methods may disproportionately run the risk of undercounting support for a particular candidate.

2. Distrusting voters may self-select out of polls.

By: Wired, “So How Wrong Were the Polls This Year, Really?” (November 2020)

Another hypothesis is that those with a distrust of the media and political institutions are less likely to answer a poll in the first place.

Take a pre-election poll with 500 responses: 200 for Candidate X and 300 for Candidate Y.

500 people is a reasonably large sample size, and looking at this poll, you might be confident that Candidate Y has a comfortable lead. But response rates are often 20% or lower. So now picture 10,000 people, 300 of whom called their vote for Candidate Y, 200 for Candidate X, and 9,500 who never responded. How confident are you feeling now?
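To make that arithmetic concrete, here is a minimal simulation sketch of differential non-response. The population size, candidate split, and response rates below are invented for illustration (they are not figures from the article or any real poll); the point is simply that when one candidate’s supporters answer pollsters less often, even a seemingly large sample can drift well away from the true split.

```python
import random

# Minimal sketch of differential non-response; all numbers are illustrative assumptions.
random.seed(1)

POPULATION = 100_000
TRUE_SHARE_X = 0.50        # assumed true support for Candidate X
RESPONSE_RATE_X = 0.04     # X supporters assumed less willing to answer pollsters
RESPONSE_RATE_Y = 0.06     # Y supporters assumed slightly more willing

responses = []
for _ in range(POPULATION):
    prefers_x = random.random() < TRUE_SHARE_X
    rate = RESPONSE_RATE_X if prefers_x else RESPONSE_RATE_Y
    if random.random() < rate:               # the vast majority never respond at all
        responses.append("X" if prefers_x else "Y")

polled_share_x = responses.count("X") / len(responses)
print(f"Respondents: {len(responses)} of {POPULATION}")
print(f"True support for X: {TRUE_SHARE_X:.0%}; polled support for X: {polled_share_x:.0%}")
```

Under these assumptions the poll comes out around 40% for Candidate X even though true support is 50%, and taking a bigger sample does not help, because the error comes from who responds rather than from how many.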

“A big worry in political polling is that there are a group of people—distrustful of the media, academia, science, etc.—who do not want to engage with pollsters and survey research,” said Neil Malhotra, a professor of political economy at Stanford.

Pollsters try to adjust for those shortcomings with various methodologies, but it’s impossible to verify their accuracy.
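One common adjustment is weighting: respondents are re-weighted so the sample matches known population shares on variables such as education or age. The sketch below uses invented numbers to show the mechanics; it is not a description of any particular pollster’s method.

```python
# Illustrative weighting sketch; all shares below are made-up assumptions.
# Suppose college graduates are over-represented among respondents relative
# to their share of the electorate, and re-weight the sample to match.
population_share = {"college": 0.40, "non_college": 0.60}  # assumed electorate composition
sample_share     = {"college": 0.70, "non_college": 0.30}  # assumed raw poll composition
support_y        = {"college": 0.60, "non_college": 0.45}  # assumed support for Candidate Y by group

unweighted = sum(sample_share[g] * support_y[g] for g in support_y)
weighted   = sum(population_share[g] * support_y[g] for g in support_y)

print(f"Unweighted support for Y: {unweighted:.1%}")  # inflated by over-sampled graduates
print(f"Weighted support for Y:   {weighted:.1%}")    # closer to the assumed electorate
```

The catch, and the reason accuracy is hard to verify, is that weighting only corrects for imbalances captured by the chosen variables. If distrustful voters within a given demographic group still refuse to respond, no amount of re-weighting on that demographic will recover them.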

3. Confirmation bias among pollsters and forecasters.

By: Statistical Modeling, Causal Inference, and Social Science Forum (Columbia University) (November 2020)

The consensus after the 2016 U.S. election was that the pollsters got it largely wrong. Going into 2020, pollsters were confident that the problems of 2016 had been accounted for, and that the 2020 polls were more likely to be right.

But with the 2020 election in the rear-view mirror, it seems as though the polling error this time around looked similar to that from 2016, prompting the question: did pollsters repeat the same mistakes? And if so, why and how?

One explanation for the persisting error is confirmation bias.

Confirmation bias describes our underlying tendency to notice, focus on, and give greater credence to evidence that fits with our existing beliefs.

Pollsters are human and likely have their own ideological and political preferences, which might lead them to overlook possible errors in a poll if it skews toward the candidate or party they prefer.

For example, if you favor Candidate X in the upcoming election, and the polls are coming out in a way that is consistent with your beliefs of how the election should go, you will be less likely to question these polling results than if you expected Candidate Y to be doing well.

4. Polls are frequently misinterpreted and misreported.

By: Laura Santhanam, “How to read the polls in 2020 and avoid the mistakes of 2016” (October 2020)

One hypothesis as to why the polls seem wrong is that we, as consumers, expect too much from them. There is a tendency among the media and the general public to treat polls as crystal balls, rather than as surveys that represent a range of possible outcomes, affected by a number of variables that pollsters cannot control.

According to Courtney Kennedy, who directs survey research at Pew Research Center, polls are “not designed to predict elections”—even though many of us tend to perceive them that way. When we look to pollsters as “election oracles,” we are setting ourselves up for disappointment when their forecasts diverge from reality.

It is human nature to want to foresee and predict the future. People try to predict the future in other domains, such as the stock market, with varying degrees of success, but predictions of human behavior can never be 100% certain. The experts may have better predictions and models than most of us “normal” people, yet as we can see, expert models and analyses are never perfect either, and we should not treat them as such.

About the Authors

Dan Pilat

Dan is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. Dan has a background in organizational decision making, with a BComm in Decision & Information Systems from McGill University. He has worked on enterprise-level behavioral architecture at TD Securities and BMO Capital Markets, where he advised management on the implementation of systems processing billions of dollars per week. Driven by an appetite for the latest in technology, Dan created a course on business intelligence and lectured at McGill University, and has applied behavioral science to topics such as augmented and virtual reality.

Dr. Sekoul Krastev

Sekoul is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. A decision scientist with a PhD in Decision Neuroscience from McGill University, Sekoul's work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.
