Learn more about how bias in your survey responses can lead to errors.
What is the difference between nonresponse bias and response bias? Survey biases fall into two broad categories: response bias and nonresponse bias. To understand bias, it’s important to explain the differences between them.
In response bias, a respondent provides inaccurate or false answers to survey questions. Nonresponse bias, by contrast, is caused by the absence of participants, not by the collection of erroneous data. Nonresponse bias can lead to a nonresponse error, which occurs when a survey fails to get a response to some or all of its questions. Both response and nonresponse bias must be avoided to obtain accurate results.
Avoid errors in your research by letting us do the hard work for you. We offer purpose-built solutions for tackling a variety of research needs.
Nonresponse bias is a type of survey bias that occurs when survey participants are unwilling or unable to respond to a survey question or an entire survey. Reasons for nonresponse vary from person to person.
To be considered a form of bias, a source of error must be systematic in nature, and nonresponse bias is no exception. If a survey’s method or design makes certain groups of potential respondents more likely to refuse to participate or to be absent during the surveying period, it creates a systematic bias.
Consider these two examples:
1. Asking for sensitive information: Consider a survey measuring tax payment compliance. Citizens who do not properly follow tax laws will be the most uncomfortable filling out this survey and the most likely to refuse. This obviously biases the data toward a more law-abiding net sample than the original sample. Nonresponse bias in surveys asking for legally sensitive information has been shown to be even more pronounced when the survey explicitly states that the government or another organization of authority is collecting the data!
2. Invitation issues: Many researchers create nonresponse bias because they do not properly pretest their invitations. For example, a large portion of young adults and business-sector workers answer the majority of their emails on their smartphones. If the survey invitation is sent in an email that doesn’t render well on mobile devices, response rates among smartphone users will drop dramatically. This creates a net sample that underrepresents the opinions of the smartphone-user demographic.
Learn more about total survey error and how to avoid it.
In addition to requests for sensitive information and invitation issues, there are several other causes of nonresponse bias, including poor survey design, wrong target audience, refusals, failed delivery, and accidental omission.
How long will your survey take to complete? Our research shows that abandon rates increase for surveys that take more than 7 to 8 minutes to complete, and completion rates drop by 5 to 20%. Make sure your survey is short and easy to understand to reduce the risk of nonresponse bias.
Before you send out your survey, ensure you’re using the right target audience. For example, a survey about working hours and wages sent to students and unemployed individuals will have fewer responses than if it is sent to employed people.
Some customers will just say “no” to completing a survey. It could be a bad day or time for them, or they may just not want to do it. Remember, just because they said “no” today doesn’t mean they won’t take one of your surveys another time.
It’s unfortunate that some surveys end up going directly into a spam folder. You might not even know that your survey wasn’t received, and it will just be recorded as a nonresponse. When you send your survey out, we suggest you track respondents so you know whether your email was opened, how many recipients clicked through to your survey, and who responded.
On occasion, someone will simply forget to complete your survey. It’s challenging to prevent this from happening, and hopefully, this is only a small number of your nonresponses.
Nonresponse bias can cause inconclusive results in your research because it increases the variance of your estimates and can leave a sample that is no longer representative of the population as a whole.
In other words, nonresponse widens the spread of your estimates as the sample size shrinks, and it causes bias when the nonrespondents differ systematically from the respondents, so the net sample no longer represents the larger study pool.
Response bias can be defined as the difference between the true values of variables in a study’s net sample group and the values of variables obtained in the results of the same study. This means that response bias is caused by any element of the research that makes its results differ from the actual opinions or facts held by the respondents in the sample. Most often, this type of bias is caused by respondents giving inaccurate answers, or by answers being incorrectly recorded or misanalyzed.
Nonresponse bias occurs when some respondents included in the sample do not respond. The key difference here is that the error comes from an absence of respondents instead of the collection of erroneous data. Put in more technical terms, nonresponse bias is the variation between the true mean values of the original sample list (people who are sent survey invites) and the true mean values of the net sample (actual respondents). Most often, this form of bias is created by refusals to participate or the inability to reach some respondents.
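To make this definition concrete, here is a minimal simulation sketch in Python. All of the numbers (the invitee list, the hours-worked values, and the response rule) are illustrative assumptions, not real survey data; the point is to show how systematic nonresponse both shifts the net-sample mean away from the original-sample mean and widens the standard error as the sample shrinks.

```python
import random
import statistics

random.seed(42)

# Hypothetical original sample: weekly hours worked for 1,000 invitees.
original_sample = [random.gauss(40, 10) for _ in range(1000)]

# Assumed response rule for illustration: people who work longer hours
# are less likely to complete the survey (systematic nonresponse).
def responds(hours):
    return random.random() < (0.8 if hours < 45 else 0.3)

net_sample = [h for h in original_sample if responds(h)]

def standard_error(values):
    return statistics.stdev(values) / len(values) ** 0.5

print(f"Original sample: n={len(original_sample)}, "
      f"mean={statistics.mean(original_sample):.1f}, "
      f"SE={standard_error(original_sample):.2f}")
print(f"Net sample:      n={len(net_sample)}, "
      f"mean={statistics.mean(net_sample):.1f}, "
      f"SE={standard_error(net_sample):.2f}")

# Nonresponse bias: net-sample mean minus original-sample mean.
bias = statistics.mean(net_sample) - statistics.mean(original_sample)
print(f"Estimated nonresponse bias: {bias:.1f} hours")
```

Because the busiest invitees drop out of this toy example, the net-sample mean understates the true average and the standard error grows as the sample shrinks, which are exactly the two effects described above.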
Let’s look at some examples of common nonresponse bias situations.
You hosted a successful leadership conference before COVID-19 shut down in-person experiences for a few years. Now, you’re planning to host an updated conference and want to send an interest survey to past attendees. You still have their emails from the last event but haven’t updated the list since then. Your delivery and open rates are extremely low. It seems that many of your past attendees have changed companies and/or email addresses and no longer check the inboxes you’ve sent the new survey to.
This is nonresponse bias because you technically sent them the survey, but they never interacted with it.
You are conducting a survey to find out about opiate use in your community. You create and send a survey that contains questions about whether the respondents have taken opiates by prescription or have purchased them through other means. You receive very few responses, and the ones you receive are all from individuals who either have never used opiates or only used them after surgery. It would have been helpful to inform your potential participants of your privacy practices, your confidentiality protections, and whether their responses would be anonymous.
This ends up as nonresponse bias because your sample is no longer representative of the entire population in your study, and many people declined to interact with the survey.
You send a survey out to your current customers with instructions to complete the survey by the end of the month. Some of your potential respondents completed the survey right away and some set it aside to do when they had more time. Several of those who set it aside forgot to take it by month’s end. You receive less than half of the survey responses you expected.
This would be considered nonresponse bias because participants simply forgot to take your survey and you’re left with a sample that no longer represents the population for your study.
How could these situations have been avoided? Read on for tips on reducing nonresponse bias.
Nonresponse bias is almost impossible to eliminate completely, but there are a few ways to reduce it as much as possible. Of course, a professional, well-structured, and well-designed survey will help you earn higher completion rates, but here is a list of ways to tweak your research process to keep your survey’s nonresponse bias low:
As discussed in the example above, it is very important to ensure that your survey and its invitations run smoothly through any medium and on any device your potential respondents might use. People are much more likely to ignore survey requests if loading times are long, questions do not fit properly on their screens, or they have to work to make the survey compatible with their device. The best advice is to account for the different communication software and devices your sample uses and to pretest your surveys and invitations on each, ensuring your survey runs smoothly for all your respondents.
Use an email before the survey goes out or an introduction to the survey when it’s sent to explain what your participant should expect from the survey. Include the survey goal, the approximate time it will take to complete, and any information about anonymity or confidentiality that you deem appropriate.
This is the perfect time to review your buyer personas to help you identify target audiences for your survey. Review customer accounts for those who have interacted with your brand in the past and may want to provide feedback. You’ll gain valuable insights and connect with customers who may be at risk of churn.
One of the worst things a researcher can do is limit data collection time in order to meet a strict deadline. Your study’s level of nonresponse bias will climb dramatically if you are not flexible about the time frames respondents have to answer your survey. Fortunately, flexibility is one of the main advantages of online surveys, since they do not require interviews (by phone or in person) that must be completed at certain times of day.
However, keeping your survey live for only a few days can still severely limit a potential respondent’s ability to answer. Instead, it is recommended to extend a survey collection period to at least two weeks so that participants can choose any day of the week to respond according to their own busy schedules.
Probability sampling is a method of selecting participants where every member of your target population has a known, non-zero chance of being chosen. Think of it like a lottery where everyone has a fair shot at being picked, though some people might have different odds of selection depending on the sampling design. This approach helps ensure your sample is representative of the whole population you're studying, making your survey results more reliable.
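For a rough illustration, here is a minimal Python sketch of one common probability-sampling design, simple random sampling without replacement, in which every member of a hypothetical contact list has the same known chance of selection. The frame and sample size below are made up for the example.

```python
import random

random.seed(7)

# Hypothetical sampling frame: every customer on a contact list.
frame = [f"customer_{i}" for i in range(1, 5001)]
sample_size = 400

# Simple random sampling without replacement: each member of the frame
# has the same known, non-zero inclusion probability.
sample = random.sample(frame, sample_size)

print(f"Chance of selection for each customer: {sample_size / len(frame):.1%}")
print("First five selected:", sample[:5])
```

Other probability designs, such as stratified or cluster sampling, give members different but still known selection odds; it is this known chance of selection that lets you generalize from the sample to the population.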
It’s important to include an option for participants to opt-out of answering certain questions. You can do this by not requiring answers to all questions or providing a multiple-choice option that participants can use to omit the question, such as “prefer not to answer.”
Double-barreled questions are those that mention more than one issue but allow only one answer to cover everything. These questions are confusing and tough to answer. For example, “Were the host and wait staff polite and helpful?” asks the respondent to rate both the host and the wait staff with one answer, when the topic would be better addressed with two questions.
Put your ego aside and offer all options in your survey questions. Rather than ask, “How was our service?” and only offer Good, Great, and Excellent as choices, use a Likert scale to provide a full range of response options—without any researcher bias.
Make your survey easy to answer with closed-ended questions like Likert scales and multiple-choice questions. The survey is easier and faster to complete with a fixed number of responses.
Sending a few reminder emails throughout your data collection period has been shown to effectively gather more completed responses. It is best to send your first reminder email midway through the collection period and the second near the end of the collection period. Make sure you do not harass the people on your email list who have already completed your survey!
Any survey that requires information that is personal in nature should reassure respondents that the data collected will be kept completely confidential. This is especially the case in surveys that focus on sensitive issues. Make certain that someone reading your invitation understands that the information they provide will be viewed as part of the whole sample and not individually scrutinized.
Many people refuse to respond to surveys because they feel they do not have the time to spend answering questions. An incentive is usually necessary to motivate people to take part in your study. Depending on the length of the survey, the difficulty of finding the right respondents (e.g., one-legged, 15th-century spoon collectors), and the information being asked for, the incentive can range from minimal to substantial in value. Remember, most respondents won’t have a vested interest in your study and must feel that the survey is worth their time!
When should you send your survey to get the highest number of respondents? How should you distribute your survey most effectively? Some of this timing and delivery is trial and error, but we can say that response rates are generally highest on Monday and lowest on Friday. Whether you send your survey by web link, email, website, social media, or through SurveyMonkey Audience depends on your target audience and what’s relevant to them.
Be sure to send a thank you or follow-up email to let respondents know that their input is appreciated and their responses will be addressed and applied as you seek to improve your products and services. Participants will be happy to know that their feedback will have an impact.
Avoid nonresponse bias and errors by following our best practices and tips. And of course, you can reduce nonresponse bias with our suite of market research tools, including SurveyMonkey Audience, Brand Tracker, and our survey platform.
Discover our toolkits, designed to help you leverage feedback in your role or industry.
Get the best data from your survey. Learn how to find survey respondents with these tools and tips from our survey research experts.
Enhance your survey response rates with 20 free email templates. Engage your audience and gather valuable insights with these customizable options!