Survey Design

What is Survey Design?

Survey design is the process of creating surveys to gather valuable insights from a specific group of people. Surveys can range from a few concise questions to extensive questionnaires, depending on the research goals. Because the questions themselves set the stage for the research, they must be written carefully and thoughtfully to ensure the survey draws accurate and relevant information from respondents.

The Basic Idea

You have probably filled out a survey at some point in your life, whether a feedback survey, a political opinion poll, or a community survey. Surveys are structured with questions designed to elicit opinions, behaviors, and experiences from respondents. So it should come as no surprise that one of the most important aspects of the survey design process, if not the most important, is coming up with questions that answer your research question.

Designing a survey is a multi-stage process that demands careful attention to detail. Effective survey design and implementation can provide critical insights into the minds of a target population. Years of research have revealed both the art and the science of crafting effective survey questions. But what does survey design entail?1

Key Steps in Survey Design

1. Determine goals: The first step of any successful research design is to pick the topic you wish to study. Surveys are no different. Knowing the purpose of your survey will focus its construction, ensuring you gather the right information.

2. Find your target population and survey methods: With your survey aims in mind, decide whom you are going to ask and which survey method is appropriate. When making these decisions, consider the sort of data you want to collect and the research timeframe.

3. Design your questions: Each question should target a facet of your study’s aims. To keep your respondents engaged, avoid repetitive questions and ensure the survey is not overly lengthy. It’s also good practice to mix up your question types (e.g., open- and closed-ended questions, multiple-choice questions, dichotomous questions) to gain a more holistic understanding of the respondents.

4. Administer your survey to your target population: Choose the appropriate method to reach your target population, whether online, in-person, or via phone. Follow standardized procedures to ensure consistency.

5. Analyze the data: Collect and analyze your data according to its type. Quantitative data are typically analyzed with inferential statistics to make sense of the numbers, while qualitative data are analyzed thematically. Tools like SPSS, R, and Excel are useful here (see the sketch after this list).

6. Draw conclusions: Based on the data you’ve collected and analyzed, draw conclusions that speak directly to your study’s goals.
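To make step 5 concrete, here is a minimal sketch in Python of how quantitative survey responses might be summarized and compared. The question, groups, and response values are all hypothetical, and SciPy’s ttest_ind is just one of many inferential tests a researcher might choose.

```python
# A minimal sketch of step 5: summarizing and comparing quantitative
# survey data. All response values below are hypothetical.
from statistics import mean, stdev
from scipy import stats  # requires SciPy

# 1-5 ratings of "I am satisfied with the service" from two groups.
group_a = [4, 5, 3, 4, 5, 4, 2, 5]  # e.g., new customers
group_b = [3, 2, 4, 3, 2, 3, 4, 2]  # e.g., returning customers

# Descriptive statistics summarize each group.
print(f"Group A: mean={mean(group_a):.2f}, sd={stdev(group_a):.2f}")
print(f"Group B: mean={mean(group_b):.2f}, sd={stdev(group_b):.2f}")

# An independent-samples t-test is one common inferential check of
# whether the two groups' average ratings differ.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```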

An effective survey design ensures that the data collected is reliable, valid, and actionable, enabling informed and confident decision-making based on the survey results.2

“The goal of a good survey is to get reliable, valid data that can be generalized to the population of interest.”

Floyd Fowler, Senior Research Fellow at the Center for Survey Research, University of Massachusetts Boston

Key Terms

Cross-sectional surveys: A survey that is carried out at a single point in time to gather data about current trends or opinions in the target population.

Longitudinal surveys: A survey that is administered more than once to the target population over a predetermined period of time to allow researchers to observe change.

Descriptive surveys: A survey that focuses on beliefs, attitudes, or behaviors to come to an in-depth understanding of the qualities of the target population.

Analytical surveys: A survey conducted to understand the relationships between variables and explain why certain trends or patterns occur. The data are analyzed statistically.

Exploratory surveys: A survey carried out on a new topic to collect preliminary information. The data is then used to help define a problem and suggest hypotheses.  

Explanatory surveys: A survey designed to explain a phenomenon. It provides additional context for prior research, helping to better understand the causes or behaviors involved in the occurrence of the phenomenon.

Probability Sampling: A sampling method used in survey research where a sample is randomly selected from a given population such that each person has a non-zero chance of being included. This method helps ensure that the sample represents the population, making the survey results generalizable beyond the sample (a minimal sketch follows these key terms).

Dichotomous Questions: A type of survey question where respondents have only two possible answers, typically “True” or “False,” or “Yes” or “No.”

Multiple-Choice Questions: A type of survey question where respondents are given a set of predetermined answer choices to choose from. For instance, a survey can ask for your highest level of education and give you these options to choose from: a) High School, b) Undergraduate Degree, c) Graduate Degree, d) PhD. 

Likert Scale: A psychometric scale commonly used in surveys to measure attitudes, opinions, or behaviors. Respondents are given a statement and asked to place their answer on the scale. For example, a survey might ask someone how happy they are from 1 to 10, and a respondent who feels neither happy nor sad would mark 5.
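To illustrate the probability sampling defined above, here is a minimal Python sketch of a simple random sample, in which every member of a hypothetical population has an equal, non-zero chance of being selected.

```python
# A minimal sketch of probability sampling: a simple random sample
# drawn without replacement. The population is hypothetical.
import random

population = [f"respondent_{i}" for i in range(1, 1001)]  # 1,000 people

random.seed(42)                            # fixed seed so the example is reproducible
sample = random.sample(population, k=100)  # every person has a 10% chance of selection

print(len(sample), sample[:3])
```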

History

Survey research dates back to the late 19th century, beginning with census taking as well as psychological and intelligence testing. However, modern surveys did not develop directly from these predecessors. Instead, they evolved from public opinion polls in the mid-1930s, which were designed to provide information for American market researchers.3

The history of survey design can be divided into three distinct eras. In the first era, from 1930 to 1960, statisticians laid out the foundations of data collection design and experimented with ways to extract statistical information from survey results. They also developed and tested probability sampling as the standard sampling method for survey research.

Despite the major strides made during the infancy of survey design, the number of surveys being conducted was relatively small. This is largely because surveys at the time were administered through face-to-face interviews, mailed questionnaires, and eventually telephone calls, which made survey research inconvenient to conduct.4

We consider the 1960s to 1990s the second era of survey design, marked by rapid expansion. The increased accessibility of the telephone made it easier for surveyors to obtain a probability sample from telephone numbers. Computers were also leveraged to process individual survey data and to perform statistical computations on the results. But the rapid expansion of survey design was not only thanks to developments in technology: in the 1960s, the U.S. federal government poured more funding into social science programs, meaning that more and more surveys were carried out, fueling the growth of survey research centers across the U.S.4

We are now in the third era. While there has been a steep decline in telephone surveys, internet-administered surveys have risen in their place. The internet has also made survey research much cheaper and more convenient to conduct, and there are now numerous online survey tools that help people either create a survey from scratch or build one from a template. Indeed, with the booming accessibility of AI technology, researchers have begun using it as a survey design tool to create surveys optimized to generate as much useful data as possible.4

People

Jean M. Converse: The former director of the Detroit Area Study at the University of Michigan and the author of numerous books on survey research, Converse made significant contributions to understanding the history and methodology of the field. Her most notable work, Survey Research in the United States: Roots and Emergence 1890-1960, provides extensive coverage of the history of survey research in the U.S. Converse was also an expert in interviewing techniques.5

Floyd J. Fowler: A Senior Research Fellow at the Center for Survey Research at the University of Massachusetts Boston, he is best known for his work on improving methodology and question design, particularly in terms of achieving high reliability and validity. His key works include Survey Research Methods and Improving Survey Questions: Design and Evaluation. In addition to his survey work, Fowler also focused on health studies.6

Robert M. Groves: The director of the United States Census Bureau from 2009 to 2012, Groves is an expert in survey methodology. Much of his work focuses on understanding and reducing survey error, as outlined in his books Survey Errors and Costs and Survey Methodology.7

Consequences

Designing a survey is more than just creating questions to ask respondents. Years of research have established survey design as a science in its own right. A notable consequence of this research is a better understanding of what goes into writing good survey questions.

Open- and Closed-ended Questions

Open- and closed-ended questions may ask the same thing but yield different answers. In a public opinion poll conducted after the 2008 presidential election, people responded differently to two versions of the same question about the political issue that drove their vote: the closed-ended version resulted in 58% of people selecting “the economy,” while the open-ended version resulted in only 35% naming it.8 This reveals the importance of keeping question type in mind. Both types have their own merits and drawbacks, and the choice the survey designer makes depends on the context and goals of the survey.

Question Wording

The choice of words and phrasing shapes how a question is understood. For surveys, each question must be understood uniformly across all respondents; otherwise, the survey will suffer from problems with validity and reliability. Research shows that survey designers should (a) use clear, simple, and specific language, (b) ask one question at a time, and (c) be wary of potentially sensitive language. All of these wording choices affect the way a question is understood and answered.

Question Order

Years of research have demonstrated that question order can influence responses, because earlier questions may provide context for later ones.9 For instance, placing a closed-ended question before an open-ended question may encourage respondents to include the closed-ended question’s answer choices in their open answer. This is problematic because researchers then learn not the respondents’ own opinions, but a reaffirmation of what the researcher thinks is important.

It is thanks to the abundance of research conducted on survey design that we can better understand the qualities of a good survey, which in turn improves the field as a whole.

Question Content

To keep respondents engaged, the questions should avoid being repetitive. After all, who likes to answer a survey that asks the same question five different times? That said, to make sure respondents are putting effort into their answers, similar questions can be deliberately grouped together.

Doing this achieves three things. First, it is a form of reliability testing, indicating consistency in the survey responses. Second, it acts as a validity check: by rephrasing or slightly altering a question, researchers can confirm that respondents’ answers reflect their true opinions or experiences, not misunderstandings or misinterpretations of a single question. Finally, it allows researchers to cross-validate responses; if different questions yield consistent answers, confidence in the survey data’s accuracy is strengthened.
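One common way to quantify the consistency checks described above, though not named in the original text, is Cronbach’s alpha, a standard internal-consistency statistic for a set of similar items. Here is a minimal Python sketch with hypothetical responses:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
# The response matrix below is hypothetical.
from statistics import variance

# Rows = respondents, columns = three similarly worded 1-5 items.
responses = [
    [4, 4, 5],
    [2, 3, 2],
    [5, 4, 4],
    [3, 3, 3],
    [4, 5, 4],
]

k = len(responses[0])                                  # number of items
items = list(zip(*responses))                          # column-wise view of the items
item_vars = sum(variance(item) for item in items)      # sum of per-item variances
total_var = variance([sum(row) for row in responses])  # variance of respondents' totals

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above ~0.7 are usually read as consistent
```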

Controversies

What impacts the survey research field most is the inevitable effect of bias. Let’s take a look at a few.

  1. Order effect: The question order in a survey may unintentionally influence respondents’ answers. For instance, earlier questions may provide context for later questions, which compromises the authenticity of the answers given. To reduce this effect, survey designers often randomize the order of the questions (see the sketch after this list); however, randomization won’t eliminate the bias altogether.
  2. Response bias: This bias describes the tendency of respondents to answer surveys inaccurately because of external influences like societal norms or the desire to please the researcher. For example, in a survey about healthy habits, a respondent may over-report the health activities they engage in just to be viewed positively. The data researchers end up working with are then skewed and fail to accurately describe the target population.
  3. Demand characteristics: These are cues in the survey design that make the research objectives obvious to respondents. The problem is that participants may answer the survey in a way that aligns with the research goals, producing biased findings that simply confirm what the researcher is already thinking. To mitigate this bias, survey designers should stick with neutral wording and indirect questioning to avoid leading respondents.
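To illustrate the randomization mentioned under the order effect, here is a minimal Python sketch that shuffles a hypothetical question list separately for each respondent; seeding by respondent ID is just one way to keep each respondent’s order reproducible.

```python
# A minimal sketch of one mitigation for order effects: presenting the
# questions in a different random order for each respondent.
import random

questions = [
    "How satisfied are you with the product?",  # hypothetical questions
    "How likely are you to purchase again?",
    "How would you rate customer support?",
]

def questions_for(respondent_id: int) -> list[str]:
    """Return the question list in a per-respondent random order."""
    rng = random.Random(respondent_id)  # seeded so each respondent's order is reproducible
    shuffled = questions.copy()
    rng.shuffle(shuffled)
    return shuffled

print(questions_for(1))
print(questions_for(2))  # very likely a different order
```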

Case Study

Lululemon’s Customer Satisfaction Survey

Lululemon is one of the biggest athletic apparel retailers in the world. Founded in 1998, the company has expanded its catalog to sell athletic wear, lifestyle wear, and personal care products. Despite its size, Lululemon sends a customer satisfaction survey to each customer just a few days after their online order arrives. The survey asks the customer to rate the product out of five stars and to provide a written review. Of course, Lululemon isn’t alone in doing this: customer satisfaction surveys help businesses retain a loyal customer base and provide valuable information about which products shine and which fall short. By sending out individual customer satisfaction surveys, Lululemon shows its customers that it cares about product quality and customer happiness with the brand.

The Net Promoter Score (NPS) Survey

The NPS is widely regarded as the standard metric for assessing customer experience. The survey measures customer loyalty by asking one simple question: “On a scale of 0 to 10, how likely are you to recommend our product/service to a friend or a colleague?” Depending on their rating, respondents fall into one of three categories: Promoters (9-10), Passives (7-8), and Detractors (0-6). Companies use this data to calculate their NPS, the percentage of Promoters minus the percentage of Detractors, and gather insights into customer satisfaction and areas for brand improvement.
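Because the NPS formula is simple arithmetic, a short Python sketch with hypothetical ratings shows the whole calculation:

```python
# NPS = % Promoters (9-10) - % Detractors (0-6). Ratings are hypothetical.
ratings = [10, 9, 8, 6, 10, 7, 3, 9, 10, 5]

promoters = sum(1 for r in ratings if r >= 9)
detractors = sum(1 for r in ratings if r <= 6)

nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS = {nps:.0f}")  # ranges from -100 to +100; here, 20
```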

Related TDL Content

Response Bias

Response bias describes the tendency to provide inaccurate or false answers to surveys due to external influences like societal norms or the perceived expectations of the researcher. It impacts the overall quality of survey results by creating flawed data and misinformed conclusions. There are many ways to go about reducing or avoiding response bias.

Beyond the Checkbox: Redesigning Surveys for Reliable Behavioral Insights

As much as surveys provide valuable information about a target population, a major limitation is capturing reliable behavioral insights, given the influence of response biases. There is a need for innovative survey designs that account for the complexities of human behavior. Experts recommend three methods to reduce response bias: best-worst scaling, task-based scenarios, and game-based designs.

References

  1. Mills, J. G. (2024, March 16). Survey Research Design, A Simple Introduction. SuperSurvey. https://www.supersurvey.com/Research#:~:text=Survey%20design%20is%20a%20critical,opinions%20within%20a%20specific%20population
  2. [see 1]
  3. Ornstein, M. (2013). The Invention of Survey Research. In A Companion to Survey Research. SAGE Publications Ltd. https://us.sagepub.com/sites/default/files/upm-assets/54400_book_item_54400.pdf
  4. Groves, R. M. (2011). Three Eras of Survey Research. Public Opinion Quarterly, 75(5), 861–871. https://doi.org/10.1093/poq/nfr057
  5. Institute for Social Research, University of Michigan. (2018). Social scientist and historian of survey research Jean Converse dies at 90. https://isr.umich.edu/news-events/news-releases/social-scientist-and-historian-of-survey-research-jean-converse-dies-at-90/
  6. Wikipedia Contributors. (n.d.). Floyd J. Fowler Jr. Wikipedia; Wikimedia Foundation. Retrieved August 7, 2024, from https://en.wikipedia.org/wiki/Floyd_J._Fowler_Jr.
  7. Wikipedia Contributors. (n.d.-b). Robert Groves. Wikipedia; Wikimedia Foundation. Retrieved August 7, 2024, from https://en.wikipedia.org/wiki/Robert_Groves
  8. Pew Research Center. (2021, May 26). Writing Survey Questions. Pew Research Center. https://www.pewresearch.org/writing-survey-questions/
  9. [see 8]

About the Author


Samantha Lau

Samantha graduated from the University of Toronto, majoring in psychology and criminology. During her undergraduate degree, she studied how mindfulness meditation impacted human memory which sparked her interest in cognition. Samantha is curious about the way behavioural science impacts design, particularly in the UX field. As she works to make behavioural science more accessible with The Decision Lab, she is preparing to start her Master of Behavioural and Decision Sciences degree at the University of Pennsylvania. In her free time, you can catch her at a concert or in a dance studio.
