
Five Tips for Better Survey Building


MICHAL GLOWKA

Many people think it’s easy to create a survey. Simply write down all the questions one needs answered, right?

It may be easy to draft a simple survey, but it’s difficult to write a good survey that will yield helpful results.

What is a good survey? Even the answer to this question is complex. In this post, we’ll explore five ways to improve questionnaire design.

Make Sure Options are Exclusive and Exhaustive

When building a list of choices for your question, use the “MECE” rule. MECE, or “mutually exclusive, collectively exhaustive,” means that no options overlap and that, taken together, the options cover all possible outcomes.

Assuming you’re looking to run a survey on electronic devices, here’s an example of a possible survey question:

What electronic devices do you own?

  • Gaming console
  • PlayStation
  • Smartphone
  • Computer

However, this does not follow the MECE principle. People who own a PlayStation may select both “Gaming console” and “PlayStation”. There is also no option for people who do not own any electronic devices. Let’s fix it:

What electronic devices do you own? Please select all that apply.

  • Gaming console (e.g., PlayStation, Xbox, Nintendo Wii, etc.)
  • Smartphone (including iPhone)
  • Tablet (e.g., iPad, Amazon Fire, Samsung Galaxy, etc.) 
  • Computer (desktop/laptop)
  • Other (please specify)
  • I do not own any electronic devices

By adding an “other” option, we provided a choice for people who own other devices not on the list (e.g., Smartwatch, e-Reader, etc.). Similarly, adding an option at the bottom for no electronic devices allows for the possibility that a respondent does not own any. These edits allow us to have a complete set of choices for this question.

Create Balance to Avoid Bias

When creating questions, people often think about possible outcomes. While it is good practice to consider what a respondent might say, we need to be careful that it does not lead to “suggestive” or biased questions. Sometimes organizations skew a set of answers on purpose (e.g., in satisfaction questions).

Here’s what that might look like:

How satisfied are you with our company’s services?

  • Very satisfied
  • Somewhat satisfied
  • Not satisfied

In the example above, not only is the question stem biased toward satisfaction, but the split between negative and positive options is also unequal. Here’s how we can amend the question:

How satisfied or dissatisfied are you with our company’s services?

  • Very satisfied
  • Somewhat satisfied
  • Neither satisfied nor dissatisfied
  • Somewhat dissatisfied
  • Very dissatisfied

In the example above, we have balanced the question stem as well as the option set. Some survey drafters choose to omit the middle, neutral option to “force” respondents onto one side of the fence or the other. This is fine, as long as the options on either side are balanced.

To take it a step further, it is recommended to reverse the direction of the scale for half of the sample. In that case, 50% of respondents will see the question as above, and the other 50% will see the choices starting with “Very dissatisfied”.
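This 50/50 flip can be sketched in a few lines of Python, assuming the survey platform allows custom per-respondent logic (the function name here is illustrative, not a real platform API):

```python
import random

# The balanced satisfaction scale from the example above.
SCALE = [
    "Very satisfied",
    "Somewhat satisfied",
    "Neither satisfied nor dissatisfied",
    "Somewhat dissatisfied",
    "Very dissatisfied",
]

def scale_for_respondent(rng: random.Random) -> list[str]:
    """Show the scale top-down for roughly half of respondents
    and bottom-up (starting with "Very dissatisfied") for the rest."""
    if rng.random() < 0.5:
        return list(SCALE)
    return list(reversed(SCALE))
```

Over many interviews, each direction is shown about half the time, which averages out any bias introduced by the position of the options.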

Understand the Survey Topic

Another important aspect of survey building is accuracy. The survey writer(s) should have at least basic knowledge of the topic in order to include appropriate choices in the questions (e.g., spend levels on a product/service, options for popular products used, etc.). Consider this question:

On average, how much did you pay per month for your cell phone bill in 2017?

  • Less than $1,000
  • $1,000 to $2,000
  • $2,001 to $3,000
  • $3,001 to $4,000
  • More than $4,000

In this case, the ranges are too wide to adequately capture the likely spectrum of answers. When ranges are too wide, as they are here, responses are likely to all fall into one bucket (“Less than $1,000”), and we miss the opportunity to analyze differences between responses.

If we rewrite the question with more knowledge of the subject, the answer choices might look like this:

On average, how much did you pay per month for your cell phone bill in 2017?

  • Less than $20
  • $20 to $40
  • $41 to $60
  • $61 to $80
  • $81 to $100
  • More than $100
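As a sanity check on the new ranges, here is a small sketch (assuming whole-dollar amounts; the function name is illustrative) that maps a reported monthly bill to the buckets above. Because the buckets are mutually exclusive and collectively exhaustive, every amount lands in exactly one:

```python
def bill_bucket(amount: float) -> str:
    """Map a monthly cell phone bill (in dollars) to the answer
    ranges above. The printed ranges assume whole-dollar amounts."""
    if amount < 20:
        return "Less than $20"
    if amount <= 40:
        return "$20 to $40"
    if amount <= 60:
        return "$41 to $60"
    if amount <= 80:
        return "$61 to $80"
    if amount <= 100:
        return "$81 to $100"
    return "More than $100"
```

Writing the boundaries out like this is an easy way to catch gaps or overlaps in a range question before fielding it.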

Consider a Logical Question Order

Question order is another important item on this list. Respondents are influenced by earlier questions in the survey, as well as by many other factors. When we want to measure brand awareness, we should not mention the brand anywhere before the question that gauges awareness.

We also need to think about the logical flow of questions. In the vast majority of surveys, some questions are displayed based on answers to previous questions, so creating a logical order requires planning the whole questionnaire. Beyond that, we should aim for a natural flow, so that consecutive questions are connected. This lets respondents become storytellers and share their opinions with us.

Leverage Randomization as a Tool for Better Results

Data quality is an important element of accurate analysis. When performing data quality checks, we may find respondents who lost focus and simply selected the first choice. Most of these respondents are identified and removed from the final dataset, but some may slip through. To minimize this effect, we randomize the order of choices within a question, so it changes for each interview. See below for examples of randomization:

(VERSION A) What electronic devices do you own? Please select all that apply.

  • Gaming console (e.g., PlayStation, Xbox, Nintendo Wii, etc.)
  • Smartphone (including iPhone)
  • Tablet (e.g., iPad, Amazon Fire, Samsung Galaxy, etc.) 
  • Computer (desktop/laptop)
  • Other (please specify)
  • I do not own any electronic devices

(VERSION B) What electronic devices do you own? Please select all that apply.

  • Smartphone (including iPhone)
  • Computer (desktop/laptop)
  • Gaming console (e.g., PlayStation, Xbox, Nintendo Wii, etc.)
  • Tablet (e.g., iPad, Amazon Fire, Samsung Galaxy, etc.)
  • Other (please specify)
  • I do not own any electronic devices

Most survey programming platforms have a function that automatically randomizes the answer choices on a given question. In more robust platforms, there may be an option to randomize a subset of the options while keeping a few static. Above, note that the last two answers do not change position. This keeps the list logical: a respondent needs to see the whole list before selecting “Other”, and keeping “I do not own any electronic devices” at the end lets respondents review the full list and confirm their idea of what counts as an electronic device.
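This partial randomization can be sketched as follows, assuming a platform that allows custom scripting (a minimal sketch; the function name is illustrative). The last few choices keep their positions while the rest are shuffled:

```python
import random

# Choices from the electronic-devices question above.
CHOICES = [
    "Gaming console (e.g., PlayStation, Xbox, Nintendo Wii, etc.)",
    "Smartphone (including iPhone)",
    "Tablet (e.g., iPad, Amazon Fire, Samsung Galaxy, etc.)",
    "Computer (desktop/laptop)",
    "Other (please specify)",
    "I do not own any electronic devices",
]

def randomized_choices(choices: list[str], n_fixed_tail: int,
                       rng: random.Random) -> list[str]:
    """Shuffle the choices, keeping the last n_fixed_tail in place."""
    cut = len(choices) - n_fixed_tail
    head = list(choices[:cut])  # copy, so the original list is untouched
    rng.shuffle(head)
    return head + list(choices[cut:])
```

For example, `randomized_choices(CHOICES, 2, rng)` shuffles the four device options while “Other (please specify)” and the no-devices option stay anchored at the bottom.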

It is important to know that randomized order will not work for all questions. Let’s think again about the satisfaction question:

How satisfied or dissatisfied are you with our company’s services?

  • Very satisfied
  • Somewhat satisfied
  • Neither satisfied nor dissatisfied
  • Somewhat dissatisfied
  • Very dissatisfied

Randomizing the order of these choices would break the natural order of the scale and make the question harder to answer.

Randomization is not limited to single questions; it can also be applied to whole blocks of questions. Suppose we have 10 brands and want to ask the same set of questions about each. To limit respondent fatigue, assume each respondent will be asked about only 2 brands. In that case, for each interview we randomly sample 2 of the 10 question blocks and randomize their order.
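Under those assumptions (10 brands, 2 blocks per respondent), the block sampling can be sketched with Python’s `random.sample`, which both picks the blocks without replacement and returns them in random order:

```python
import random

def blocks_for_respondent(brands: list[str], n_blocks: int,
                          rng: random.Random) -> list[str]:
    """Pick n_blocks distinct brands for one interview; the sampled
    order doubles as the display order of the question blocks."""
    return rng.sample(brands, n_blocks)
```

Across many interviews this spreads the brand blocks roughly evenly, so no single brand dominates the sample or always appears first.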

As you can see, building surveys is a complex process, and these practices are just the tip of the iceberg.