Using mobile devices to complete web-based surveys is an inescapable trend. Given the growth of this medium, some researchers are concerned about whether mobile devices are a viable channel for administering self-report online surveys. Drawing on two online surveys, one with a US sample and one with a China sample, this study compared response quality between participants responding via mobile devices and via PCs. Results from both samples revealed that although mobile respondents took longer to complete the surveys than PC respondents, response quality did not differ significantly between the groups. Several behaviour patterns among mobile respondents were also identified in both samples. These findings offer practical implications for optimizing web-based surveys for mobile users in tourism and hospitality research.
This study explored the influence of Internet memes, specifically image macros of animals with motivational captions, on survey respondents' engagement with the survey-taking experience and subsequent data quality. A web-based field experiment was conducted with online survey respondents from two sample sources: one crowdsourced and one commercially managed online panel. Half of the respondents from each sample source were randomly selected to see the memes at various points throughout the survey; the other half did not. Direct and indirect measures of survey engagement and response quality were used to assess the effectiveness of the memes. Quantitative results were inconclusive, with few significant differences in measures of engagement and data quality between respondents in the meme and control conditions in either sample source. However, qualitative open-ended comments from respondents who saw the memes in both sample groups revealed that memes give respondents a fun break and relief from the cognitive burden of answering online survey questions. In conclusion, memes represent a relatively inexpensive and easy way for survey researchers to connect with respondents and show appreciation for their time and effort.
The aim of this study is to determine the extent to which the sequential use of self-administered and interviewer-administered modes (telephone and face-to-face surveys) produces changes in the content questions of a questionnaire. We use a survey on linguistic habits conducted in a bilingual Spanish region. A self-administered online survey applied to the whole sample achieved cooperation from 44.3% of the sample. Non-respondents were then contacted through interviewer-administered modes (telephone and face-to-face). As expected, the self-administered online mode shows greater use of the community language. However, when the factors that most influence this response are considered, sociodemographic variables and place of birth show more influence than the mode of administration used.
The increased use of smartphones in web survey responding not only raised new research questions but also fostered new ways to research survey completion behavior. Smartphones have many built-in sensors, such as accelerometers that measure acceleration (i.e., the rate of change of velocity of an object over time). Sensor data open up new research opportunities by providing information about physical completion conditions that, for instance, can affect response quality. In this study, we explore three research questions: (1) To what extent are respondents willing to comply with motion instructions? (2) What variables affect the acceleration of smartphones? (3) Do different motion levels affect response quality? We conducted a smartphone web survey experiment using the Netquest opt-in panel in Spain and asked respondents to stand at a fixed point or walk around while answering five single questions. The results reveal high compliance with motion instructions, with compliance being higher in the standing than in the walking condition. We also discovered that several variables, such as the presence of third parties, increase the acceleration of smartphones. However, the quality of responses to the five single questions did not differ significantly between the motion conditions, a finding that is in line with previous research. Our findings provide new insights into how compliance changes with motion tasks and suggest that the collection of acceleration data is a feasible and fruitful way to explore survey completion behavior. The findings also indicate that refined research on the connection between motion levels and response quality is necessary.
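The motion-level idea in the abstract above can be illustrated with a small sketch. Assuming accelerometer samples arrive as per-axis readings in m/s², one simple summary of device motion is the variability of the overall acceleration magnitude; the function names and sample values here are hypothetical and not taken from the study:

```python
import math
import statistics

def acceleration_magnitudes(samples):
    """Combine per-axis accelerometer readings (x, y, z in m/s^2)
    into overall acceleration magnitudes."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def motion_level(samples):
    """Summarize device motion as the standard deviation of the
    acceleration magnitude: near zero for a stationary device,
    larger while walking."""
    return statistics.stdev(acceleration_magnitudes(samples))

# Illustrative readings: a near-still device vs. a swinging one.
standing = [(0.0, 0.1, 9.8), (0.1, 0.0, 9.8), (0.0, 0.1, 9.8), (0.1, 0.1, 9.8)]
walking = [(1.2, 0.5, 9.1), (0.2, 1.8, 10.4), (2.1, 0.3, 8.7), (0.4, 2.2, 11.0)]

print(motion_level(standing) < motion_level(walking))  # True
```

A threshold on such a motion level is one plausible way to check compliance with standing-versus-walking instructions, though the study's actual classification procedure may differ.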
Web survey respondents are frequently distracted during survey completion, which potentially affects the quality of the data they provide. This article reports results from a laboratory experiment examining how distractions during web survey completion influence data quality. Participants were randomly assigned to experimental groups using a 2 (device type) × 3 (form of distraction) between-subjects factorial design. They were asked to complete a web questionnaire on either a PC or a tablet and were allocated to one of three distraction conditions: (1) the presence of other people in the room having a loud conversation, (2) the presence of music, or (3) no distraction. The study examines the effect of distraction on various measures of data quality and attentiveness. While participants felt significantly more distracted in the presence of other people or music, no significant effect of distraction was found for any of the data quality and attentiveness measures. The findings are encouraging for survey practitioners: even if web respondents listen to music or are in noisy environments, these forms of distraction generally do not seem to affect the quality of the responses they provide.
In web questionnaires with a paging design, where each question appears on a separate page, a progress indicator is an element that informs respondents about their current position within the questionnaire. Linear progress indicators are commonly used, and fast-to-slow progress indicators are sometimes used for research purposes. In this paper, we programmed an individually adapted progress indicator that monitors respondents' answers, validates them in real time and gives the respondent an additional motivational impulse (in the form of acceleration) only if necessary. We were interested in how such a progress indicator affects response quality and the dropout rate in comparison with linear and fast-to-slow progress indicators. The results of this study suggest that an individually adapted progress indicator increases participants' commitment to finishing the survey, the time devoted to responding and the number of answers given.
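The logic of such an individually adapted indicator can be sketched as follows. The boost rule (linear progress plus an acceleration proportional to the share of skipped answers) is a hypothetical assumption for illustration; the paper does not specify its formula:

```python
def adaptive_progress(page, total_pages, answered, expected):
    """Return the progress value (0-100) to display after a page.

    Baseline is linear progress; when the respondent has skipped
    answers (answered < expected), an extra acceleration boost is
    added as a motivational impulse, capped at 100.
    """
    linear = 100.0 * page / total_pages
    if answered >= expected:
        return min(linear, 100.0)
    # Hypothetical boost: up to 10 points, scaled by the share of
    # missing answers on the pages seen so far.
    boost = 10.0 * (expected - answered) / expected
    return min(linear + boost, 100.0)

print(adaptive_progress(5, 10, 5, 5))  # 50.0 -> linear, no boost needed
print(adaptive_progress(5, 10, 3, 5))  # 54.0 -> accelerated impulse
```

The design point is that the indicator adapts per respondent: a complete answerer sees ordinary linear progress, while a skipper sees slightly faster apparent progress as encouragement.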
Methodological studies usually gauge response quality in narrative open-ended questions with the proportion of nonresponse, response length, response time, and number of themes mentioned by respondents. However, not all of these indicators may be comparable and appropriate for evaluating open-ended questions in a cross-national context. This study assesses the cross-national appropriateness of these indicators and their potential bias. For the analysis, we use data from two web surveys conducted in May 2014 with 2,685 respondents and in June 2014 with 2,689 respondents and compare responses from Germany, Great Britain, the United States, Mexico, and Spain. We assess open-ended responses for a variety of topics (e.g., national identity, gender attitudes, and citizenship) with these indicators and evaluate whether they arrive at similar or contradictory conclusions about response quality. We find that all indicators are potentially biased in a cross-national context due to linguistic and cultural reasons and that the bias differs in prevalence across topics. Therefore, we recommend using multiple indicators as well as items covering a range of topics when evaluating response quality in open-ended questions across countries.
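Three of the four indicators named above (nonresponse proportion, response length, number of themes) can be computed directly from raw responses. A minimal sketch, using a hypothetical keyword-based theme count that is not the study's coding scheme:

```python
import re

def open_ended_indicators(responses, theme_keywords):
    """Compute common response quality indicators for a narrative
    open-ended question: nonresponse proportion, mean response
    length in words, and mean number of themes mentioned."""
    nonblank = [r for r in responses if r and r.strip()]
    nonresponse = 1.0 - len(nonblank) / len(responses)
    mean_length = sum(len(r.split()) for r in nonblank) / len(nonblank)
    # Crude theme count: how many keyword patterns each answer matches.
    theme_counts = [
        sum(1 for kw in theme_keywords if re.search(kw, r, re.IGNORECASE))
        for r in nonblank
    ]
    mean_themes = sum(theme_counts) / len(theme_counts)
    return nonresponse, mean_length, mean_themes

responses = ["I feel proud of my country and its language", "", "Nothing"]
nonresp, length, themes = open_ended_indicators(
    responses, [r"\bcountry\b", r"\blanguage\b"]
)
print(round(nonresp, 2), round(length, 1), themes)  # 0.33 5.0 1.0
```

The cross-national caveat from the abstract applies directly here: word counts and keyword matches behave differently across languages, which is exactly why the authors recommend multiple indicators.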
Purpose
Perimetry is a demanding and strenuous examination method that is often accompanied by signs of fatigue, leading to false responses and thus incorrect results. Therefore, it is essential to monitor response quality. The purpose of this study was to evaluate the response time (RT) and its variability (RTV) as quality indicators during static automated perimetry.
Methods
Size III Goldmann stimuli (25.7′) were shown with the OCTOPUS 900 perimeter in four visual field locations with 13 different stimulus luminance levels (0.04–160 cd/m²). An increased rate of false-positive and false-negative catch trials (25% each) served to monitor response quality simultaneously with response time recording. Data evaluation was divided into global and individual analyses. For the global analysis, agreement indices were calculated (AI: the agreement between time periods with an increased number of false responses to catch trials and time periods with pathological time-based response values, set in relation to time periods in which only one of the two criteria was considered pathological); for the individual analysis, Spearman correlation coefficients were calculated. Ophthalmologically normal subjects with a visual acuity ≥ 0.8 and a maximum spherical/cylindrical ametropia of ±8.00/2.50 dpt were included.
Results
Forty-eight subjects (18 males, 30 females, aged 22–78 years) were examined. The total number of false responses to catch trials was (median/maximum) 6/82. RT and RTV were compared to the occurrence of incorrect responses to catch trials. The resulting individual Spearman correlation coefficients (median/maximum) were ρ_RT = 0.05/0.35 for RT and ρ_RTV = 0.27/0.61 for RTV. The global analysis of the RTV showed agreement indices (median/maximum) of AI_RTV = 0.14/0.47.
Conclusions
According to this study, an increased proportion of catch trials is a suitable verification tool for candidate response quality indicators. The RTV is a promising parameter for indicating response quality.
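The individual analysis above correlates RT/RTV with false responses via Spearman's ρ, i.e., the Pearson correlation of ranks. A stdlib-only sketch of that computation, with illustrative per-period values that are not the study's data:

```python
def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

rtv = [0.12, 0.18, 0.25, 0.31, 0.40]  # per-period RT standard deviation (s)
errors = [0, 1, 1, 2, 3]              # false catch-trial responses per period
print(round(spearman(rtv, errors), 2))  # 0.97
```

In practice `scipy.stats.spearmanr` would do the same job; the point here is only to make the rank-correlation step of the individual analysis concrete.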
As ever more surveys are conducted, recruited respondents are more likely to already have previous survey experience. Furthermore, it has become more difficult to convince individuals to participate in surveys, and thus, incentives are increasingly used. Both previous survey experience and participation in surveys due to incentives have been discussed in terms of their links with response quality. This study aims to assess whether previous web survey experience and survey participation due to incentives are linked with three indicators of response quality: item non‐response, primacy effect and non‐differentiation. Analysing data of the probability‐based CROss‐National Online Survey panel covering Estonia, Slovenia and Great Britain, we found that previous web survey experience is not associated with item non‐response and the occurrence of a primacy effect but is associated with non‐differentiation. Participating due to the incentive is not associated with any of the three response quality indicators assessed. Hence, overall, we find little evidence that response quality is linked with either previous web survey experience or participating due to the incentive.
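Two of the indicators used above, item non-response and non-differentiation, are commonly operationalized on grid (battery) answers. A minimal sketch, assuming integer ratings with None for skipped items and the standard deviation across items as the differentiation measure (one common choice among several; the study's exact operationalization may differ):

```python
import statistics

def item_nonresponse_rate(answers):
    """Share of grid items left unanswered (None)."""
    return sum(a is None for a in answers) / len(answers)

def nondifferentiation(answers):
    """Standard deviation of the ratings actually given across grid
    items: 0 indicates pure straightlining (no differentiation)."""
    given = [a for a in answers if a is not None]
    return statistics.pstdev(given)

straightliner = [4, 4, 4, 4, 4, 4]
differentiator = [1, 5, 3, None, 2, 4]

print(item_nonresponse_rate(straightliner))             # 0.0
print(nondifferentiation(straightliner))                # 0.0
print(round(item_nonresponse_rate(differentiator), 2))  # 0.17
```

Lower standard deviation flags stronger non-differentiation, which is the pattern the study found to be associated with previous web survey experience.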