Web surveys can be programmed to capture a variety of paradata regarding how respondents answer questions. These paradata provide great opportunities for researchers to assess response quality, specifically whether respondents engage in satisficing – not spending enough effort to provide accurate responses. In particular, speeding (i.e., giving answers very quickly) has increasingly been used as an indicator for satisficing and low response quality. However, few studies have examined whether speeding actually leads to compromised response quality. To address this gap in the literature, the current study investigates speeding behaviors among Web respondents from a probability-based panel. We first identify and characterize respondents who speed more frequently than others over the entire questionnaire. To explore the impact of speeding on response quality, we then examine whether respondents who speed more frequently also straightline in more grid questions. The results show that the tendency to speed is related to several respondent characteristics, particularly age (younger respondents are more likely to speed). This study also reveals that more speeding seems to be universally related to more straightlining, and this relationship is particularly strong among less educated respondents.
Web surveys are often used to collect data for psychological research. However, the inclusion of many inattentive respondents can be a problem. Various methods for detecting inattentive respondents have been proposed, most of which require the inclusion of additional items in the survey for detection or the calculation of detection variables after data collection. This study proposes a method for detecting inattentive respondents in web surveys using machine learning. The method requires only the collection of response time and the inclusion of a Likert scale, eliminating the need to include special detection items in the survey. Based on data from 16 web surveys, a method was developed using predictor variables not included in existing methods. While previous machine learning methods for detecting inattentive respondents can only be applied to the same surveys as the data on which the models were developed, the proposed model is generic and can be applied to any questionnaire as long as response time is available and a Likert scale is included. In addition, the proposed method showed partially higher accuracy than existing methods.
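The two inputs this abstract describes – per-item response times and Likert-scale answers – can be turned into simple inattentiveness features. The following is a minimal illustrative sketch, not the paper's actual machine-learning model; the function names, thresholds, and the rule-based flag are assumptions chosen for demonstration only.

```python
# Illustrative sketch (NOT the paper's model): derive inattentiveness
# features from the two inputs the abstract names, response times
# (seconds per item) and Likert-scale answers, then apply a simple
# rule-of-thumb flag. Thresholds here are arbitrary examples.
from statistics import median

def inattention_features(times, answers):
    """Return (median response time, straightlining share) for one respondent."""
    med_time = median(times)
    # Share of consecutive identical answers, a crude straightlining proxy.
    repeats = sum(1 for a, b in zip(answers, answers[1:]) if a == b)
    return med_time, repeats / max(len(answers) - 1, 1)

def flag_inattentive(times, answers, min_seconds=2.0, max_repeat_share=0.9):
    """Flag respondents who answer very fast AND nearly always repeat answers."""
    med_time, repeat_share = inattention_features(times, answers)
    return med_time < min_seconds and repeat_share >= max_repeat_share
```

In a machine-learning setup like the one the abstract outlines, features such as these would feed a trained classifier rather than fixed thresholds; the rule above only shows what the raw inputs can express.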
After briefly summarizing why there is a need to enhance web survey data, this paper explains how metered, geolocation, visual and voice data could help to supplement conventional web survey data, particularly when mobile participation is high. It presents expected benefits of these four data types in terms of respondents’ burden, data quality and possible new insights, as well as a number of expected disadvantages, both on the respondents’ and researchers’ sides. Finally, the paper discusses what is still missing and the next steps to turn these new opportunities into realities.
The considerable growth in the number of smart mobile devices with a fast Internet connection provides new challenges for survey researchers. In this article, I compare the data quality between two survey modes: self-administered web surveys conducted via personal computer and those conducted via mobile phones. Data quality is compared based on five indicators: (a) completion rates, (b) response order effects, (c) social desirability, (d) non-substantive responses, and (e) length of open answers. I hypothesized that mobile web surveys would result in lower completion rates, stronger response order effects, and less elaborate answers to open-ended questions. No difference was expected in the level of reporting in sensitive items and in the rate of non-substantive responses. To test the assumptions, an experiment with two survey modes was conducted using a volunteer online access panel in Russia. As expected, mobile web was associated with a lower completion rate, shorter length of open answers, and similar level of socially undesirable and non-substantive responses. However, no stronger primacy effects in mobile web survey mode were found.
Cooperative Survey Research
Ansolabehere, Stephen; Rivers, Douglas
Annual Review of Political Science, 01/2013, Volume 16, Issue 1
Journal Article
Peer reviewed
Open access
The rise of the Internet has radically altered survey research by changing how we think about sampling, driving down the cost of interviewing, and creating new ways of asking questions. This technology has also opened the way to a new style of cooperatively organized survey research. Projects such as the Cooperative Congressional Election Study (CCES) and the Cooperative Campaign Analysis Project (CCAP) involve collaborations of dozens of research teams that can collect very large samples and many smaller surveys tailored to the research questions of particular teams. This review examines the organization and key findings of these projects as well as their sampling methodology and its validity. Of particular importance, this article offers a direct comparison of the CCES with actual election results and the American National Election Studies (ANES), showing that the new survey approach yields highly accurate results that replicate the correlation structure of the ANES.
Identifying inattentive respondents in self-administered surveys is a challenging goal for survey researchers. Instructed response items (IRIs) provide a measure for inattentiveness in grid questions that is easy to implement. The present article adds to the sparse research on the use and implementation of attention checks by addressing three research objectives. In a first study, we provide evidence that IRIs identify respondents who show an elevated use of straightlining, speeding, item nonresponse, inconsistent answers, and implausible statements throughout a survey. Excluding inattentive respondents, however, did not alter the results of substantive analyses. Our second study suggests that respondents’ inattentiveness partially changes as the context in which they complete the survey changes. In a third study, we present experimental evidence that a mere exposure to an IRI does not negatively or positively affect response behavior within a survey. A critical discussion on using IRI attention checks concludes this article.
With the growing possibilities for conducting web surveys, researchers increasingly use such surveys to recruit student samples for research purposes in a wide array of social science disciplines. Simultaneously, higher education students are recurrently asked to complete course and teacher evaluations online and to participate in small-scale research projects of fellow students, potentially leading to survey fatigue among student populations across the globe. One of the most frequently reported effects of over-surveying is a decrease in overall response rates. This situation has significant impacts on the generalizability and external validity of findings based on web surveys. The collection of reliable data is, nevertheless, crucial for researchers as well as educational practitioners and administrators, and strategies should be developed for achieving acceptable response rates. This paper reports on a methodological experiment (N = 15,651) conducted at the University of Antwerp, Belgium, in which possible strategies to improve survey response are explored. I specifically focus on the impact of an extra reminder as well as specific reminder contents on response rates. The results reveal that extra reminders are effective for increasing response rates, but not for diversifying the sample.
This review focuses on recent methodological and technological developments in survey data collection. Surveys are facing unprecedented challenges from both societal and technological changes. Against this backdrop, I review the survey profession's response to these challenges and developments to enhance and extend the survey tool. I discuss the decline in random digit dialing and the rise of address-based sampling, along with the corresponding shift from telephone surveys to self-administered (mail and/or Web) modes. I discuss the rise in nonprobability sampling approaches, especially those associated with online data collection. I also review so-called big data alternatives to surveys. Finally, I discuss a number of recent methodological and technological trends designed to modernize the survey method. I conclude that although they face a number of major challenges, surveys remain a robust and flexible method for collecting data on, and making inference to, populations.
Web Versus Mobile Web
Keusch, Florian; Yan, Ting
Social Science Computer Review, 12/2017, Volume 35, Issue 6
Journal Article
Peer reviewed
Due to a rising mobile device penetration, Web surveys are increasingly accessed and completed on smartphones or tablets instead of desktop computers or laptops. Mobile Web surveys are also gaining popularity as an alternative self-administered data collection mode among survey researchers. We conducted a methodological experiment among iPhone owners and compared the participation and response behavior of three groups of respondents: iPhone owners who started and completed our survey on a desktop or laptop PC, iPhone owners who self-selected to complete the survey on an iPhone, and iPhone owners who started on a PC but were requested to switch to iPhone. We found that respondents who completed the survey on a PC were more likely to be male, to have a lower educational level, and to have more experience with Web surveys than mobile Web respondents, regardless of whether they used the iPhone voluntarily or were asked to switch from a PC to an iPhone. Overall, iPhone respondents had more missing data and took longer to complete the survey than respondents who answered the questions on a PC, but they also showed less straightlining behavior. There are only minimal device differences on survey answers obtained from PCs and iPhones.
► This study examines the drivers of web survey response rates in a sample of over 24,000 scientists and engineers.
► Subjects were randomly assigned into conditions with different static and dynamic contact design features.
► Features included personalization, incentives, exact timing, delay between contacts, and change in features over the survey life cycle.
► Design features significantly increased the odds of a response by up to 48%.
► The paper concludes with detailed recommendations for survey researchers.
Web surveys have become increasingly central to innovation research but often suffer from low response rates. Based on a cost–benefits framework and the explicit consideration of heterogeneity across respondents, we consider the effects of key contact design features such as personalization, incentives, and the exact timing of survey contacts on web survey response rates. We also consider the benefits of a “dynamic strategy”, i.e., the approach to change features of survey contacts over the survey life cycle. We explore these effects experimentally using a career survey sent to over 24,000 junior scientists and engineers. The results show that personalization increases the odds of responding by as much as 48%, while lottery incentives with a high payoff and a low chance of winning increase the odds of responding by 30%. Furthermore, changing the wording of reminders over the survey life cycle increases the odds of a response by over 30%, while changes in contact timing (day of the week or hour of the day) did not have significant benefits. Improvements in response rates did not come at the expense of lower data quality. Our results provide novel insights into web survey response behavior and suggest useful tools for innovation researchers seeking to increase survey participation.