In a web-based experiment with 1,750 randomly sampled university students, we investigated the effect of mailed prenotification plus prepaid cash, mailed prenotification plus a prepaid voucher, mailed prenotification plus a postpaid voucher, and mailed prenotification on its own, as compared with a control group that received neither prenotification nor incentives. Dependent measures were response, retention, and item nonresponse. Compared with no prenotification, mailed prenotification increased response and retention and decreased item nonresponse. Prenotification plus prepaid cash maximized response and retention. Item nonresponse was lowest with prenotification plus postpaid vouchers and second lowest with prenotification plus prepaid cash. In addition, we compared the costs across all experimental groups. Total costs were highest for prenotification plus prepaid cash, but costs per respondent and per retainee were highest in the control group. In sum, this experiment shows ways of improving participation in web surveys.
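The cost comparison described above reduces to simple per-unit arithmetic: total group cost divided by the number of respondents or retainees. A minimal Python sketch, using purely hypothetical figures rather than the study's actual data:

```python
# Illustrative cost-per-respondent calculation. All figures are
# hypothetical placeholders, not the experiment's actual results.
groups = {
    "control (no prenotification)":   {"total_cost": 500.0,  "respondents": 120, "retainees": 70},
    "prenotification only":           {"total_cost": 900.0,  "respondents": 200, "retainees": 130},
    "prenotification + prepaid cash": {"total_cost": 2500.0, "respondents": 320, "retainees": 240},
}

for name, g in groups.items():
    per_respondent = g["total_cost"] / g["respondents"]
    per_retainee = g["total_cost"] / g["retainees"]
    print(f"{name}: {per_respondent:.2f} per respondent, {per_retainee:.2f} per retainee")
```

A group with the highest total cost can still have the lowest cost per respondent if the incentive lifts response enough, which is the pattern the abstract reports.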
A key goal of survey interviews is to collect the highest quality data possible from respondents. In practice, however, it can be difficult to achieve this goal because respondents do not always understand particular survey questions as designers intended. Researchers have used a variety of indicators to identify and predict respondent confusion and difficulty in answering questions in different modes. In web surveys, it is possible to automatically detect response difficulty in real time. The research to date has focused on response latencies—mostly long response times—as evidence of difficulty. In addition to response latencies, however, web surveys offer rich behavioral data, which may predict respondent confusion and difficulty more directly than response times. This article focuses on one such behavior, mouse movements. We examine a set of mouse movements participants engage in when answering questions about experimental scenarios whose difficulty has been manipulated (as confirmed by respondent ratings). This approach makes it possible to determine which movements are general movements, demonstrating how a person interacts with a computer, and which movements are related to response difficulty. We find not only that certain mouse movements are highly predictive of difficulty but also that such movements add considerable value when used in conjunction with response times. The approach developed in this article may be useful in delivering help to confused respondents in real time and as a diagnostic tool to identify confusing questions.
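As a way to picture how cursor behavior can supplement response times, here is a minimal Python sketch, not the authors' implementation, that derives two illustrative features, total path length and horizontal direction reversals, from timestamped (x, y) samples:

```python
import numpy as np

def cursor_features(xy, t):
    """Simple cursor features from (x, y) samples with timestamps t.
    Illustrative features only, not the article's exact measures."""
    d = np.diff(xy, axis=0)
    path_length = np.hypot(d[:, 0], d[:, 1]).sum()  # total distance travelled
    dx = d[:, 0]
    # horizontal direction reversals: sign changes in x-movement
    reversals = int(np.sum(np.sign(dx[:-1]) * np.sign(dx[1:]) < 0))
    response_time = t[-1] - t[0]
    return {"path_length": path_length,
            "x_reversals": reversals,
            "response_time": response_time}

# Hypothetical trajectory: the cursor wanders back and forth before clicking.
xy = np.array([[10, 10], [60, 12], [40, 30], [80, 35], [55, 50], [120, 60]])
t = np.array([0.0, 0.4, 0.9, 1.5, 2.2, 3.0])
print(cursor_features(xy, t))
# A classifier (e.g., logistic regression) could combine such movement
# features with response time to predict question difficulty.
```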
Likert scales are popular for measuring attitudes, but response styles, a source of measurement error associated with this type of question, can result in measurement bias. This study investigates the effect of data collection mode on two such response styles, acquiescent and extreme responding, using data from the 2012 American National Election Studies (ANES). 2012 was the first year in which the ANES conducted two parallel surveys, one through face-to-face interviews and another over the Web, using two independent national probability samples and an identical questionnaire. We used three sets of balanced Likert scales from the survey to measure the acquiescent and extreme response styles. Using a latent class analysis modeling approach, we find that: (1) both acquiescent and extreme response styles exist in both the face-to-face and Web survey modes; (2) face-to-face respondents demonstrate more acquiescent and extreme response styles than Web respondents; and (3) the mode effect is similar for white, black, and Hispanic respondents.
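To make the two response styles concrete, the following sketch indexes them from raw item responses; the data are hypothetical, and the study itself relies on latent class models rather than these simple counts:

```python
import numpy as np

# Hypothetical responses to six 5-point Likert items (1 = strongly disagree,
# 5 = strongly agree); rows are respondents. Illustrative only: the ANES
# analysis uses latent class models, not these raw indices.
responses = np.array([
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 4, 2],
    [1, 5, 1, 5, 5, 1],
])

acquiescence = (responses >= 4).mean(axis=1)            # share of agree responses
extremity = np.isin(responses, [1, 5]).mean(axis=1)     # share of endpoint responses

for i, (a, e) in enumerate(zip(acquiescence, extremity)):
    print(f"respondent {i}: acquiescence={a:.2f}, extremity={e:.2f}")
```

Balanced scales matter here: when half the items are reverse-worded, consistently agreeing with everything signals acquiescence rather than a substantive attitude.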
Web surveys are a common self-administered mode of data collection using written language to convey information. This language is usually accompanied by visual design elements, such as numbers, symbols, and graphics. As shown by previous research, such elements of survey questions can affect response behavior because respondents sometimes use interpretive heuristics, such as the “middle means typical” and the “left and top means first” heuristics, when answering survey questions. In this study, we adopted the designs and survey questions of two experiments reported in Tourangeau, Couper, and Conrad (2004). One experiment varied the position of nonsubstantive response options in relation to other substantive response options, and the second varied the order of the response options. We implemented both experiments in an eye-tracking study. By recording respondents’ eye movements, we can observe how they read question stems and response options and draw conclusions about the survey response process the questions initiate. This enables us to investigate the mechanisms underlying the two interpretive heuristics and to test the assumptions of Tourangeau et al. (2004) about the ways in which interpretive heuristics influence survey responding. The eye-tracking data reveal mixed results for the two interpretive heuristics. For the middle means typical heuristic, it remains somewhat unclear whether respondents seize on the conceptual or the visual midpoint of a response scale when answering survey questions. For the left and top means first heuristic, we found that violations of the heuristic increase response effort in terms of eye fixations. These results are discussed in the context of the findings of the original studies.
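For readers unfamiliar with how eye-movement records become evidence about reading behavior, a minimal sketch counting fixations per area of interest (AOI); the screen layout and fixation coordinates below are hypothetical:

```python
# Count fixations per area of interest (AOI). The layout rectangles and
# fixation coordinates are hypothetical, purely for illustration.
aois = {
    "question_stem": (0, 0, 400, 60),    # (x_min, y_min, x_max, y_max)
    "option_top":    (0, 80, 400, 110),
    "option_middle": (0, 120, 400, 150),
}

fixations = [(120, 30), (200, 40), (90, 95), (150, 130), (160, 135)]

counts = {name: 0 for name in aois}
for fx, fy in fixations:
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= fx <= x1 and y0 <= fy <= y1:
            counts[name] += 1

print(counts)  # more fixations on an element suggests greater processing effort
```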
Not to Be Considered Harmful. Sommer, Jana; Diedenhofen, Birk; Musch, Jochen. Social Science Computer Review, 06/2017, Vol. 35, Issue 3. Journal article, peer reviewed.
The number of respondents who access web surveys on a mobile device (smartphone or tablet) has increased rapidly over the last few years. Compared with desktop computers, mobile devices have smaller screens and different input options, and they are used in a larger variety of locations and situations. The suspicion that data quality may suffer when online respondents use mobile devices has stimulated a growing body of research, which has mainly focused on paradata and web survey design. To investigate whether respondents’ devices affect the quality of web survey data, we examined the responses of 1,826 mobile-device and desktop participants in a political online survey that asked questions about the 2013 German federal election. To determine the reliability and validity of data submitted via mobile devices, we assessed the consistency of the participants’ responses across questions and validated the responses against various internal and external criteria. Replicating previous findings, mobile-device respondents were younger and more likely to be female, and they produced higher dropout rates and longer completion times than desktop respondents. However, data produced by respondents using mobile devices were as consistent, reliable, and valid as data produced by respondents using desktop computers. These findings contradict the notion that mobile-device users compromise the reliability and validity of data collected online and suggest that researchers need not be wary of mobile-device respondents participating in web surveys.
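The consistency checks mentioned here can be quantified with standard reliability statistics. A minimal sketch computing Cronbach's alpha separately for each device group; the item data are hypothetical, and the study's specific criteria may differ:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 4-item scale scores for each device group (illustrative only).
mobile = [[4, 4, 5, 4], [2, 3, 2, 2], [5, 4, 4, 5], [1, 2, 1, 2]]
desktop = [[3, 3, 4, 3], [5, 5, 4, 5], [2, 1, 2, 2], [4, 4, 5, 4]]

print(f"alpha (mobile):  {cronbach_alpha(mobile):.2f}")
print(f"alpha (desktop): {cronbach_alpha(desktop):.2f}")
```

Comparable alpha values across the two groups would point in the same direction as the abstract's conclusion: device type need not degrade internal consistency.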
Rating scales are used extensively in surveys. A rating scale can descend from the highest to the lowest point or from the positive to the negative pole. A rating scale can also start with the lowest point (or the negative pole) and ascend to the highest point (or the positive pole). Previous research has shown that the direction of the scale, i.e., the order of the response options, has an impact on responses, and that respondents are more likely to select response options close to the starting point of the scale, regardless of whether the scale ascends or descends. This paper advances the literature by examining empirically whether the response order effect in rating scale questions is driven by satisficing. Drawing on data from five experiments, we found that scale direction had a significant and pronounced impact on response distributions. Although the effect of scale direction was stronger among speeders than among non-speeders, the effect was observed across the board, among both those at high risk of satisficing and those who were not.
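One common way to operationalize the speeder/non-speeder split and test whether scale direction shifts response distributions is a chi-square test on the two scale versions within each speed group. A minimal sketch with hypothetical counts, not the experiments' data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of selected options (1..5) under ascending vs
# descending scale versions, for speeders only. Illustrative numbers;
# speeders would be defined by, e.g., a completion-time cutoff.
speeders = np.array([
    [60, 45, 30, 20, 15],   # ascending scale
    [18, 22, 28, 48, 54],   # descending scale
])

chi2, p, dof, _ = chi2_contingency(speeders)
print(f"speeders: chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
# Repeating the same test for non-speeders shows whether the
# scale-direction effect is stronger among those likely to satisfice.
```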
The National Center for Health Statistics is assessing the usefulness of recruited web panels in multiple research areas. One research area examines the use of close-ended probe questions and split-panel experiments for evaluating question-response patterns. Another is the development of statistical methodology to leverage the strength of national survey data to evaluate, and possibly improve, health estimates from recruited panels. Recruited web panels, with their lower cost and faster production cycle, in combination with established population health surveys, may be useful for some purposes for statistical agencies. Our initial results indicate that web survey data from a recruited panel can be used for question evaluation studies without affecting other survey content. However, whether these data can provide estimates that align with those from large national surveys will depend on many factors, including further understanding of the design features of the recruited panel (e.g., coverage and mode effects), the statistical methods and covariates used to obtain the original and adjusted weights, and the health outcomes of interest.
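A simple form of the statistical adjustment alluded to here is post-stratification of panel weights to benchmarks from a national survey. A minimal sketch with hypothetical cell shares; production methods (raking, calibration) generalize the same idea to many covariates:

```python
# Post-stratification sketch: scale panel weights so weighted cell shares
# match benchmark shares from a national survey. Hypothetical numbers.
benchmark_share = {"18-39": 0.35, "40-64": 0.42, "65+": 0.23}   # national survey
panel_share = {"18-39": 0.48, "40-64": 0.38, "65+": 0.14}       # recruited panel

adjustment = {cell: benchmark_share[cell] / panel_share[cell]
              for cell in benchmark_share}
print(adjustment)
# Each panelist's base weight is multiplied by the factor for their cell,
# pulling the weighted panel composition toward the national benchmark.
```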
Panel surveys are increasingly experimenting with the use of self-administered modes of data collection as alternatives to more expensive interviewer-administered modes. As data collection costs continue to rise, it is plausible that future panel surveys will forego interviewer administration entirely. We examine the implications of this scenario for recruitment bias in the first wave of a panel survey of employees in Germany. Using an experimental multi-mode design and detailed administrative record data available for the full sample, we investigate the magnitude of two sources of panel recruitment bias: nonresponse and panel consent (i.e., consent to a follow-up interview). Across 29 administrative estimates, we find relative measures of aggregate nonresponse bias to be comparable, on average, between face-to-face and self-administered (mail/Web) recruitment modes. Furthermore, we find the magnitude of panel consent bias to be more severe in self-administered surveys, but implementing follow-up conversion procedures with the non-consenters diminishes panel consent bias to near-negligible levels. Lastly, we find the total recruitment bias (nonresponse plus panel consent) to be similar in both mode groups, a reassuring result facilitated by the panel consent follow-up procedures. Implications of these findings for survey practice and suggestions for future research are provided in conclusion.
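Because administrative records cover the full sample, nonresponse bias can be computed directly rather than estimated. A minimal sketch of the relative bias measure, using hypothetical values:

```python
# Relative nonresponse bias from administrative records (hypothetical values).
# bias = respondent mean - full-sample mean; the relative version scales it
# by the full-sample mean so estimates are comparable across variables.
full_sample_mean = 3200.0   # e.g., administrative monthly earnings, all sampled
respondent_mean = 3350.0    # same variable among respondents only

bias = respondent_mean - full_sample_mean
relative_bias = bias / full_sample_mean
print(f"nonresponse bias: {bias:.0f}  (relative: {relative_bias:.1%})")
```

Averaging the absolute relative biases over many such estimates (29 in the study) gives the aggregate measure the abstract compares across recruitment modes.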
The aim of this paper is to generate qualified information on technologies that are expected to be relevant to cancer care over the next thirty years (2017-2037). A methodology for future technology research was developed, drawing on the concepts of technology foresight. Future technologies were identified by consulting editorials of journals specializing in oncology. Nine technologies with the potential to impact cancer care in the future were selected. Additionally, a method was developed for consulting a large number of experts identified from articles indexed in the Thomson Reuters Web of Science. More than 83,000 cancer specialists were invited to answer a web survey in which they expressed their expectations about the future of cancer care. The questionnaire was answered by 2,408 specialists, 56% of whom stated they were highly knowledgeable experts. Our results show that antibody-related therapies, molecular imaging, and tumor delivery systems are the technologies most likely to be used in cancer care in the next thirty years. The main reasons given for the choice of these technologies were improvements in the prognosis of the disease and improved diagnostic reliability. Meanwhile, knowledge and scientific barriers were highlighted as the main obstacles to the development of the technologies deemed to have more limited chances of success.