Why do we give false survey responses?

The Response Bias, explained.

What is Response Bias?

The response bias refers to our tendency to provide inaccurate, or even false, answers to self-report questions, such as those asked on surveys or in structured interviews.

Where this bias occurs

Researchers who rely on participant self-report methods for data collection are faced with the challenge of structuring questionnaires in a way that increases the likelihood of respondents answering honestly. Take, for example, a researcher investigating alcohol consumption on college campuses through a survey administered to the student population. In this case, a major concern would be ensuring that the survey is neutral and non-judgmental in tone. If the survey comes across as disapproving of heavy alcohol consumption, respondents may be more likely to underreport their drinking, leading to biased survey results.

Individual effects

When this bias occurs, we come up with an answer based on external factors, such as societal norms or what we think the researcher wants to hear. This prevents us from taking the time to self-reflect and consider how the topic being assessed is actually relevant to us. Not only is this a missed opportunity for critical thinking about oneself and one’s actions, but, in the case of research, it also produces inaccurate data.

Systemic effects

Researchers need to proceed with caution when designing surveys or structured interviews in order to minimize the likelihood that respondents will exhibit response bias. If they fail to do so, this systematic error can be detrimental to the entire study. Instead of advancing knowledge, biased results can lead researchers to draw inaccurate conclusions, which can have wide-reaching implications. Research is expensive to conduct, and the questions under investigation tend to be important. For these reasons, tremendous effort is required in research design to ensure that all findings are as accurate as possible.

Why it happens

Response bias can occur for a variety of reasons. To categorize the possible causes, different forms of response bias have been defined.

Social desirability bias

First is social desirability bias, which occurs when sensitive questions are answered not with the truth, but with a response that conforms to societal norms. While there’s no real “right” answer to the survey question, social expectations may deem one viewpoint more acceptable than another. To conform to what we feel is the appropriate stance, we tend to under- or over-report our own position.

Demand characteristics

Second are demand characteristics. This is when we attempt to predict how the researcher wants us to answer, and adjust our survey responses to align with that prediction. Simply being part of a study can influence the way we respond. Anything from our interactions with the researcher to the extent of our knowledge about the research topic can have an effect on our answers. This is why it’s such a challenge for the principal investigator to design a study that eliminates, or at least minimizes, this bias.

Acquiescence bias

Third is acquiescence bias, the tendency to agree with all “Yes/No” or “Agree/Disagree” questions. This may occur because we are striving to please the researcher, or, as posited by Cronbach,1 because we are motivated to call to mind information that supports the given statement. He suggests that we selectively focus on information that agrees with the statement and unconsciously ignore any memories that contradict it.
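
In practice, one common screen for acquiescence is to pair statements with reverse-keyed counterparts and check whether a respondent agrees with both. Here’s a minimal sketch of that idea in Python; the item names, the 1-to-5 scale, and the agreement cutoff are all illustrative assumptions, not details from the studies cited here.

```python
# Minimal sketch: flagging possible acquiescence using reverse-keyed item pairs.
# A respondent who agrees with both a statement and its logical opposite is
# likely responding to the format ("agree") rather than to the content.
# Item names and the agreement cutoff are illustrative assumptions.

AGREE_CUTOFF = 4  # on a 1-5 agree/disagree scale, treat 4 or 5 as "agree"

# Each pair holds an item and its reverse-keyed counterpart.
ITEM_PAIRS = [
    ("enjoy_parties", "prefer_quiet_evenings"),
    ("plan_ahead", "decide_spontaneously"),
]

def acquiescence_flags(responses):
    """Count pairs where the respondent agreed with both opposing statements."""
    return sum(
        1
        for item, reversed_item in ITEM_PAIRS
        if responses[item] >= AGREE_CUTOFF and responses[reversed_item] >= AGREE_CUTOFF
    )

respondent = {
    "enjoy_parties": 5,
    "prefer_quiet_evenings": 5,  # agrees with both opposites: a warning sign
    "plan_ahead": 4,
    "decide_spontaneously": 2,
}
print(acquiescence_flags(respondent))  # prints 1: one contradictory pair flagged
```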

Extreme responding 

A final example of a type of response bias is extreme responding. It’s commonly seen in surveys that use Likert scales - a scaled response format with several possible answers ranging from the most negative to the most positive. Responses are biased when respondents select the endpoint options almost exclusively. That is to say, if the Likert scale ranges from 1 to 7, they only ever answer 1 or 7. This can happen when respondents are uninterested and don’t feel like taking the time to actively consider the options. Other times, it happens because demand characteristics have led the participant to believe that the researcher desires a certain response.
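
Survey analysts sometimes screen for this pattern by computing the share of each respondent’s answers that sit at the scale’s endpoints. The sketch below illustrates the idea for a 1-to-7 Likert scale; the 0.8 flagging threshold is an illustrative assumption, not a standard from the literature cited here.

```python
# Minimal sketch: measuring extreme response style on a 1-7 Likert scale.
# The index is the proportion of a respondent's answers at either endpoint;
# the 0.8 flagging threshold is an illustrative assumption.

SCALE_MIN, SCALE_MAX = 1, 7

def extreme_response_index(answers):
    """Fraction of answers that fall on the endpoints of the scale."""
    endpoints = sum(1 for a in answers if a in (SCALE_MIN, SCALE_MAX))
    return endpoints / len(answers)

respondent_a = [1, 7, 7, 1, 7, 1, 7, 7]  # almost exclusively endpoints
respondent_b = [3, 4, 5, 2, 6, 4, 3, 5]  # uses the middle of the scale

for name, answers in [("A", respondent_a), ("B", respondent_b)]:
    idx = extreme_response_index(answers)
    label = "possible extreme responder" if idx > 0.8 else "typical spread"
    print(f"Respondent {name}: index={idx:.2f} ({label})")
```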

Why it is important

In order to conduct well-designed research and obtain the most accurate results possible, academics must have a comprehensive understanding of response bias. However, it’s not just researchers who need to understand this effect. Most of us have participated, or will go on to participate, in research of some kind, even if it’s as simple as filling out a quick online survey. By being aware of this bias, we can work on being more critical and honest in answering these kinds of questions, instead of responding automatically.

How to avoid it

By knowing about response bias and answering surveys and structured interviews actively, instead of passively, respondents can help researchers by providing more accurate information. However, when it comes to reducing the effects of this bias, the onus is on the creator of the questionnaire.

Wording is of particular importance when it comes to combating response bias. Leading questions can prompt survey-takers to respond in a certain way, even if it’s not how they really feel. For example, in a customer-satisfaction survey, a question like “Did you find our customer service satisfactory?” subtly leans towards a more favorable response, whereas asking the respondent to “Rate your customer service experience” is more neutral.

Emphasizing the anonymity of the survey can help reduce social desirability bias, as people feel more comfortable answering sensitive questions honestly when their names aren’t attached to their answers. Utilizing a professional, non-judgmental tone is also important for this.

To avoid bias from demand characteristics, participants should be given as little information about the study as possible. Say, for example, you’re a psychologist at a university, investigating gender differences in shopping habits. A question on this survey might be something like: “How often do you go clothing shopping?”, with the following answer choices: “At least once a week”, “At least once a month”, “At least once a year”, and “Every few years”. If your participants figure out what you’re researching, they may answer differently than they otherwise would have.

Many of us slip into response bias, specifically extreme responding and acquiescence bias, when we get bored, simply because it’s easier than putting in the effort to actively consider each statement. For that reason, it’s important to factor in length when designing a survey or structured interview. If it’s too long, participants may zone out and respond less carefully, thereby giving less accurate information.

How it all started

Interestingly, response bias wasn’t originally considered much of an issue. Gove and Geerken claimed that “response bias variables act largely as random noise,” which doesn’t significantly affect results as long as the sample size is big enough.2 They weren’t the only researchers to try to quell concerns over this bias, but more recently it has become widely recognized as a genuine problem in academia. This is due to the substantial body of research supporting the presence of an effect, such as Furnham’s literature review.3 Knäuper and Wittchen’s 1994 study also demonstrates this bias, specifically in the context of standardized diagnostic interviews administered to the elderly, who tend to attribute symptoms of depression to physical conditions.4

Example 1 - Depression

An emotion-specific response bias has been observed in patients with major depression, as evidenced by a study conducted by Surguladze et al. in 2004.5 The results of this study showed that patients with major depression had greater difficulty discriminating between happy and sad faces presented for a short duration of time than did the healthy control group. This discrimination impairment wasn’t observed when facial expressions were presented for a longer duration. On these longer trials, patients with major depression exhibited a response bias towards sad faces. It’s important to note that discrimination impairment and response bias did not occur simultaneously, so it’s clear that one can’t be attributed to the other.

Understanding this emotion-specific response bias allows for further insight into the mechanisms of major depression, particularly into the impairments in social functioning associated with the disorder. It’s been suggested that the bias towards sad stimuli may cause people with major depression to interpret situations more negatively.6

Researchers working outside of mental health should be aware of this bias as well, so that they know to screen for major depression should their survey include questions pertaining to emotion or interpersonal interactions. 

Example 2 - Social media

Social media is a useful tool, thanks to both its versatility and its wide reach. However, while most of the surveys used in academic studies have gone through rigorous scrutiny and have been peer-reviewed by experts in the field, this isn’t always the case with social media polls. 

Many businesses will administer surveys over social media to gauge their audience’s views on a certain matter. There are many reasons why the results of these kinds of polls should be taken with a grain of salt - for one thing, the sample is most certainly not random. In these situations, response bias is also likely at play.

Take, for example, a poll conducted by a makeup company, where the question is “How much did you love our new mascara?”, with the possible answers: “So much!” and “Not at all.” This is a leading question, which essentially asks for a positive response. Additionally, since there’s no option for a middle-ground response, respondents may be prone to acquiescence bias in order to please the company. Even if the results of this survey are overwhelmingly positive, you might not want to immediately splurge on the mascara. The positive response could have more to do with the structure of the survey than with the quality of the product.

Summary

What it is

Response bias describes our tendency to provide inaccurate responses on self-report measures.

Why it happens

Social pressures, disinterest in the survey, and eagerness to please the researcher are all possible causes of response bias. Furthermore, the design of the survey itself can prompt participants to adjust their responses. 

Example 1 - Major depression

People with major depression are more likely to identify a given facial expression as sad than people without major depression. This can impact daily interpersonal interactions, in addition to influencing responses on surveys related to emotion-processing.

Example 2 - Interpreting social media surveys

Surveys that aren’t designed to prevent response bias provide misleading results. For this reason, social media surveys, which can be created by anyone, shouldn’t be taken at face value.

How to avoid it

When filling out a survey, actively considering each response, instead of answering automatically, can decrease the extent to which we engage in response bias. Anyone conducting research should take care to craft surveys that are anonymous, that are neutral in tone, that provide sufficient answer options, and that don’t give away too much about the research question.

Related Articles

How Does Society Influence One’s Behavior?

This article evaluates the ways in which our behaviors are molded by societal influences. The author breaks down the different influences our peers have on our actions, which is pertinent when it comes to exploring social desirability bias. 

The Framing Effect

The framing effect describes how factors such as wording, setting, and situation influence our choices and opinions. The way survey questions are framed can lead to response bias by causing respondents to over- or under-report their true viewpoint. This article elaborates on the implications of the framing effect, which are powerful and widespread.

Sources

  1. Cronbach, L. J. (1942). Studies of acquiescence as a factor in the true-false test. Journal of Educational Psychology, 33(6), 401–415. doi:10.1037/h0054677
  2. Gove, W. R., & Geerken, M. R. (1977). Response bias in surveys of mental health: An empirical investigation. American Journal of Sociology, 82(6), 1289–1317. doi:10.1086/226466
  3. Furnham, A. (1986). Response bias, social desirability and dissimulation. Personality and Individual Differences, 7(3), 385–400. doi:10.1016/0191-8869(86)90014-0
  4. Knäuper, B., & Wittchen, H.-U. (1994). Diagnosing major depression in the elderly: Evidence for response bias in standardized diagnostic interviews? Journal of Psychiatric Research, 28(2), 147–164. doi:10.1016/0022-3956(94)90026-4
  5. Surguladze, S. A., Young, A. W., Senior, C., Brébion, G., Travis, M. J., & Phillips, M. L. (2004). Recognition accuracy and response bias to happy and sad facial expressions in patients with major depression. Neuropsychology, 18(2), 212–218. doi:10.1037/0894-4105.18.2.212
  6. See 5.

About the Authors

Dan Pilat

Dan is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. Dan has a background in organizational decision making, with a BComm in Decision & Information Systems from McGill University. He has worked on enterprise-level behavioral architecture at TD Securities and BMO Capital Markets, where he advised management on the implementation of systems processing billions of dollars per week. Driven by an appetite for the latest in technology, Dan created a course on business intelligence and lectured at McGill University, and has applied behavioral science to topics such as augmented and virtual reality.

Dr. Sekoul Krastev

Sekoul is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. A decision scientist with a PhD in Decision Neuroscience from McGill University, Sekoul's work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.
