Several survey research companies have run their own internal studies to measure how much time a user spends on each question in surveys of varying lengths. Some of these studies found that surveys with only one or two questions averaged nearly a minute (or more) per question, surveys with around 15 questions averaged roughly 20 seconds per question, and surveys with more than 15 questions averaged as little as 10 seconds per question.
From there, these researchers drew the following conclusions:
- The longer the survey, the less time a user spends thinking critically about each question.
- The longer the survey, the less likely the data is to be accurate.
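Before weighing those conclusions, it's worth seeing what the cited per-question averages imply about total completion time. The following back-of-the-envelope sketch uses the 60/20/10-second figures from the studies above; the specific survey lengths chosen are illustrative, not from any one study:

```python
# Back-of-the-envelope totals implied by the per-question averages above.
# The 60/20/10-second figures come from the studies cited in the text;
# the survey lengths chosen here are illustrative.

def avg_seconds_per_question(num_questions):
    """Rough per-question average by survey length, per the cited studies."""
    if num_questions <= 2:
        return 60   # ~a minute or more per question
    elif num_questions <= 15:
        return 20   # roughly 20 seconds per question
    else:
        return 10   # as little as 10 seconds per question

for n in (1, 2, 15, 25):
    total = n * avg_seconds_per_question(n)
    print(f"{n:>2} questions -> ~{total} s total ({total / 60:.1f} min)")
```

Note the oddity this produces: under these averages, a 25-question survey would finish faster overall than a 15-question one. That alone hints the raw averages may be measuring something other than deliberation.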
We’ve discussed in the past the numerous problems that long surveys can create. Beyond dropout rate alone, they lead to habituation, central tendency bias, and other issues. These alone suggest that critical thinking likely suffers when a survey is very long, and that long surveys may not be collecting accurate data.
However, while those problems are real, the conclusions these researchers draw may not be entirely accurate, especially with regard to time spent per question. There are plenty of other reasons that surveys with many questions receive less time per question. Consider all of the following:
Reasons Users Spend Less Time Per Question
First and foremost, the idea that users spend over a minute (some researchers claim as much as 90 seconds) on a one-question survey is doubtful. That is a considerably long time to spend on any given question, even if it is the only question on the survey, and as we’ve established multiple times in the past, there is rarely a respondent who cares enough about the outcome of the survey to give the question that much thought. So what could be happening?
The most likely reason is that the survey takes a while to load, and the respondents are unwilling to begin the survey until they’re sure it’s fully loaded. They may do other things while the survey is loading or simply prepare themselves to take the survey. They may also have opened the survey in a new tab and have not yet finished reading their email. All of these add a great deal of time to the start of a survey in a way that doesn’t carry over to additional questions.
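If a survey platform records both when the page was opened and when the respondent first interacted with the question, this "getting ready" time can be separated from genuine answering time. A minimal sketch of that adjustment, using hypothetical timestamp fields (`opened_at`, `first_interaction_at`, `submitted_at`) that a real platform may or may not expose:

```python
# Separating "getting ready" time (loading, tab-switching, settling in)
# from answering time for a one-question survey.
# Field names (opened_at, first_interaction_at, submitted_at) are
# hypothetical; real survey platforms vary in what timestamps they log.

def split_response_time(opened_at, first_interaction_at, submitted_at):
    """Return (setup_seconds, answering_seconds) for a single response."""
    setup = first_interaction_at - opened_at          # time before engaging
    answering = submitted_at - first_interaction_at   # time on the question itself
    return setup, answering

# Timestamps in seconds: opened at t=0, first click at t=48, submitted at t=65.
setup, answering = split_response_time(0, 48, 65)
print(f"setup: {setup} s, answering: {answering} s")
# A naive "time per question" measure would report all 65 s as answering time.
```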
Another reason is that if a survey has only one question, chances are it is a very important question. It may involve very long text, or a video (which is going to skew the results of the survey), or something else that gives that single question enough relevance to be useful to the researcher. Rarely does a single-question survey ask something simple and easy to answer, nor should it, since the data wouldn’t be that useful.
To explain the shorter time spent per question on longer surveys, one needs to remember that the way long surveys are presented has changed. It is not as though the user sees one question per page. Most questions are grouped into question tables that make it easy for the user to fly through them, going down the list and answering on a scale. These question tables do have habituation and central tendency issues, but they are also much easier to answer and, on a per-question basis, take far less time than a standalone question.
Getting Into the Survey
One also shouldn’t discount the way that a user gets used to a survey as they’re completing it. For the first few questions they’re simply getting into the swing, but soon they’ll know exactly where the “next” button is, they’ll be used to the question wording, they’ll know what they’re meant to answer, and it becomes much easier to simply go through the remainder of the survey without as much difficulty.
It may also be erroneous to believe that speeding up, by itself, means less critical thinking. Often the questions asked in a long survey do not require a considerable amount of thought, and while it would be nice if users spent a great deal of time on every question, they would likely have reached the same answers on all non-technical or easy questions anyway.
Does Speed Matter At All?
We’ve discussed possible reasons that the “time spent per question” metric may not be as important as some researchers believe. The idea is that when your sample answers questions too quickly, they are not thinking critically and thus not giving accurate answers. But that may be overstating how important the measurement really is.
That is not to say that it doesn’t matter. However, speeding through a survey does produce all of the following problems:
- Greater Likelihood of Error: If you’re rushing through a survey, you increase the likelihood that you select the wrong answer or misread a question. So while speed may not necessarily cause a lack of critical thinking, it can cause errors that still hurt the quality of your data.
- Indicative of Fake Answers: How fast someone takes a survey may also be indicative of false answers, or answers given without any thought (let alone critical thought) simply to get through the survey. This can occur in very long surveys.
- Lack of Critical Thinking: The point driven home above is that answering quickly does not necessarily mean the answers would be any different if the user took their time on each question. But some respondents genuinely are not giving each question any time or thought. For those respondents, it is worth asking: if the survey were shorter and they spent more time answering, would they provide better answers?
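One common quality-control response to these problems is to flag “speeders” — respondents whose per-question pace falls well below the sample’s typical pace — and review their responses separately. A minimal sketch of that idea; the 0.5-of-median threshold is a common rule of thumb, not a standard, and the data layout is assumed:

```python
# Flagging suspiciously fast respondents ("speeders") for review.
# The 0.5-of-median threshold is a rule of thumb, not a standard;
# tune it to your own survey's timing distribution.
import statistics

def flag_speeders(durations_by_respondent, num_questions, factor=0.5):
    """Return ids of respondents whose per-question time is < factor * median."""
    per_q = {rid: total / num_questions
             for rid, total in durations_by_respondent.items()}
    median = statistics.median(per_q.values())
    return sorted(rid for rid, t in per_q.items() if t < factor * median)

# Total completion times in seconds for a hypothetical 20-question survey.
durations = {"r1": 400, "r2": 380, "r3": 90, "r4": 420, "r5": 60}
print(flag_speeders(durations, num_questions=20))  # r3 and r5 are far below pace
```

Flagged responses needn’t be discarded outright; comparing their answer distributions against the rest of the sample is a gentler first step.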
So while there are a number of reasons that time spent per question may mean nothing, clearly there are also real problems that can arise when respondents rush.
This isn’t a study, so it’s possible that the amount of time someone spends on a question has a serious effect on the quality of the data. But there are many, many reasons to believe that 10 seconds per question is more than enough to answer each one with enough thought to get as relevant an answer as you can expect from anyone taking a survey — not least because several question types (such as question tables), along with simple familiarity with the survey’s setup, make it much easier to answer questions quickly.
It’s certainly possible, maybe even probable, that surveys where respondents spend little time on each question are producing poor data. But it’s arguably just as plausible that the time spent on each question is more than enough, and that the timing data other researchers have collected on shorter surveys is inaccurate or unrelated to data quality.