Methodological studies usually gauge response quality in narrative open-ended questions with the proportion of nonresponse, response length, response time, and number of themes mentioned by respondents. However, not all of these indicators may be comparable and appropriate for evaluating open-ended questions in a cross-national context. This study assesses the cross-national appropriateness of these indicators and their potential bias. For the analysis, we use data from two web surveys conducted in May 2014 with 2,685 respondents and in June 2014 with 2,689 respondents and compare responses from Germany, Great Britain, the United States, Mexico, and Spain. We assess open-ended responses for a variety of topics (e.g., national identity, gender attitudes, and citizenship) with these indicators and evaluate whether they arrive at similar or contradictory conclusions about response quality. We find that all indicators are potentially biased in a cross-national context due to linguistic and cultural reasons and that the bias differs in prevalence across topics. Therefore, we recommend using multiple indicators as well as items covering a range of topics when evaluating response quality in open-ended questions across countries.
As ever more surveys are conducted, recruited respondents are more likely to already have previous survey experience. Furthermore, it has become more difficult to convince individuals to participate in surveys, and thus, incentives are increasingly used. Both previous survey experience and participation in surveys due to incentives have been discussed in terms of their links with response quality. This study aims to assess whether previous web survey experience and survey participation due to incentives are linked with three indicators of response quality: item non-response, primacy effect, and non-differentiation. Analysing data of the probability-based CROss-National Online Survey panel covering Estonia, Slovenia, and Great Britain, we found that previous web survey experience is not associated with item non-response and the occurrence of a primacy effect but is associated with non-differentiation. Participating due to the incentive is not associated with any of the three response quality indicators assessed. Hence, overall, we find little evidence that response quality is linked with either previous web survey experience or participating due to the incentive.
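Non-differentiation (straightlining) is typically operationalized over grid or battery items. As a hedged illustration only, not the paper's exact operationalization, one common measure is the standard deviation of a respondent's answers across a battery, where 0 indicates pure straightlining:

```python
# Illustrative sketch (an assumption, not the study's exact measure):
# quantify non-differentiation as the standard deviation of one
# respondent's ratings across a grid of items on the same scale.
from statistics import pstdev

def nondifferentiation(grid_answers):
    """Population SD of one respondent's answers to a grid.

    grid_answers: numeric ratings on a common scale, e.g. 1-5.
    Lower values indicate less differentiation; 0.0 = straightlining.
    """
    return pstdev(grid_answers)

print(nondifferentiation([3, 3, 3, 3, 3]))  # straightliner -> 0.0
print(nondifferentiation([1, 5, 2, 4, 3]))  # differentiated respondent
```

Other operationalizations (e.g., the share of identical consecutive answers) follow the same per-respondent, per-battery pattern.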
Design principles for survey questionnaires viewed on desktop and laptop computers are increasingly being seen as inadequate for the design of questionnaires viewed on smartphones. Insights gained from empirical research can help those conducting mobile surveys to improve their questionnaires. This article reports on a systematic literature review of research presented or published between 2007 and 2016 that evaluated the effect of smartphone questionnaire design features on indicators of response quality. The evidence suggests that survey designers should make efforts to "optimize" their questionnaires to make them easier to complete on smartphones, fit question content to the width of smartphone screens to prevent horizontal scrolling, and choose simpler types of questions (single-choice questions, multiple-choice questions, text-entry boxes) over more complicated types of questions (large grids, drop boxes, slider questions). Based on these results, we identify design heuristics, or general principles, for creating effective smartphone questionnaires. We distinguish between five of them: readability, ease of selection, visibility across the page, simplicity of design elements, and predictability across devices. They provide an initial framework by which to evaluate smartphone questionnaires, though empirical testing and further refinement of the heuristics is necessary.
Web surveys are commonly used in social research because they are usually cheaper, faster, and simpler to conduct than other modes. They also enable researchers to capture paradata such as response times. In particular, the determination of proper cutoff values to define outliers in response time analyses has proven to be an intricate challenge; in fact, to a certain degree, researchers set them arbitrarily. In this study, we use "SurveyFocus (SF)"—a paradata tool that records the activity of the web-survey pages—to assess outlier definitions based on response time distributions. Our analyses reveal that these common procedures provide relatively sufficient results. However, they are unable to detect all respondents who temporarily leave the survey, causing bias in the response times. Therefore, we recommend a two-step procedure consisting of the utilization of SF and a common outlier definition to attain a more appropriate analysis and interpretation of response times.
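The two-step idea can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the page-leave flags are assumed to come from a paradata tool such as SurveyFocus, and the "common outlier definition" is assumed here to be a ±2 SD cutoff on log response times (response times are typically right-skewed, so rules are often applied on the log scale):

```python
# Hypothetical two-step flagging: (1) paradata flag for respondents who
# temporarily left the survey page, (2) a distribution-based outlier rule,
# assumed here to be +/- 2 SD on log-transformed response times.
import math
from statistics import mean, stdev

def flag_outliers(response_times, left_page_flags, z=2.0):
    """Return booleans marking response-time outliers.

    response_times:  per-respondent times in seconds (all > 0)
    left_page_flags: paradata booleans, True if the respondent left the page
    z:               cutoff in SD units on the log scale (an assumption)
    """
    logs = [math.log(t) for t in response_times]
    m, s = mean(logs), stdev(logs)
    return [
        left or abs(lt - m) > z * s  # step 1: paradata; step 2: outlier rule
        for lt, left in zip(logs, left_page_flags)
    ]

times = [30, 34, 36, 38, 40, 42, 45, 900]  # one implausibly long time
left = [False, False, True, False, False, False, False, False]
print(flag_outliers(times, left))
```

The point of the combination is visible in the example: the third respondent has an unremarkable response time and is caught only by the paradata flag, while the last one is caught only by the distributional rule.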
Online surveys have the advantages of affordability and speedy collection; however, there have been some concerns about their response quality. Previous studies have measured response quality using the nonresponse rate and biased choice behavior. However, it is difficult to detect defective respondents who select answers at random, without careful consideration, by using those indicators. Therefore, this study measured the accuracy rate as an index of response quality, using a questionnaire containing questions for which the correct answers were known in advance. Questionnaire length was found to have a negative effect on the response rate but no significant effect on the accuracy rate. In addition, considering the response device, responses from smartphone users have a lower accuracy rate than responses from personal computer users. Lastly, respondents who answer faster than 10 seconds per question have a lower accuracy rate. It is important to understand the factors that affect response quality and to design surveys accordingly.
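The two quality checks described above reduce to simple per-respondent computations. The sketch below is a hedged illustration: the function names and data are invented, while the 10-second-per-question threshold follows the abstract:

```python
# Illustrative quality checks (names and data invented for this sketch):
# an accuracy rate over check questions with known correct answers, and
# a speed flag for respondents averaging under 10 seconds per question.

def accuracy_rate(given, correct):
    """Share of check questions answered with the known correct option."""
    return sum(g == c for g, c in zip(given, correct)) / len(correct)

def is_speeder(total_seconds, n_questions, threshold=10.0):
    """True if the respondent averaged under `threshold` seconds/question."""
    return total_seconds / n_questions < threshold

print(accuracy_rate(["a", "c", "b", "b"], ["a", "c", "d", "b"]))  # 0.75
print(is_speeder(total_seconds=180, n_questions=25))              # True (7.2 s/q)
```

In practice these flags would be cross-tabulated with device type and questionnaire length, as the study does with its accuracy-rate index.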
► Web survey: effect of personalization, reminders, and post-incentives. ► It is useful to send a reminder every 7 days to increase the retention rate. ► It is useful to combine longer reminder intervals with personalized e-mail messages. ► Personalization, reminders, and incentives together do not improve response quality.
This study centers on three parameters that can influence responses to Web-based surveys: personalization, the periodicity of follow-up mailings and incentives based on prize draws. The results show the need to send a lower number of reminders with personalized e-mail messages when the aim is for respondents to complete the full questionnaire. In contrast, the use of post-incentives based on prize draws was not found to have a significant effect on retention rate when used alone or in combination with personalized messages and/or a lower number of reminders. Moreover, none of the above factors, except personalization, improves response quality when used separately or in conjunction.
The aim was to study whether sports-specific reaction training using immersive virtual reality improves the response behavior of karate athletes. During ten sessions, 15 experienced young karate athletes responded to upcoming attacks of a virtual opponent. On the one hand, in PRE and POST tests, we examined the sports-specific response behavior using the time for response (time between a defined starting point and the first reaction), response accuracy (according to a score system), and kind of response (direct attack or a blocking movement) based on a movement analysis. On the other hand, we analyzed the unspecific response behavior using the reaction time and motor response time based on the reaction test of the Vienna test system. Friedman tests with subsequent Dunn–Bonferroni post-hoc tests and one-factorial ANOVAs showed no significant differences (p > 0.05) in the unspecific parameters. However, significant improvements (p < 0.05) of the sports-specific parameters were found, with a larger increase in the intervention groups (large effects) than in the control groups (small and moderate effects in time for response, and no significant effects in response quality). It can be concluded that VR training is useful for improving response behavior in young karate athletes.
This study examines the effect of the timing of follow-ups, different incentives, and the length and presentation of the questionnaire on the response rate and response quality in an online experimental setting. The results show that short questionnaires have a higher response rate, although long questionnaires still generate a surprisingly high response. Furthermore, vouchers seem to be the most effective incentive in long questionnaires, while lotteries are more efficient in short surveys. A follow-up study revealed that lotteries with small prizes but a higher chance of winning are most effective in increasing the response rate. Enhancing questionnaires with visual elements, such as product images, leads to a higher response quality and generates interesting interaction effects with the length of the questionnaire and the incentives used. Finally, the timing of the follow-up has no significant influence on the response rate.