How to display questions that are part of a battery in self-administered surveys is an important decision. Battery items may be displayed in a grid in a mail survey or computer web survey, but are often displayed as individual items on mobile devices. Although past research has compared grids to item-by-item displays in computer and mobile web surveys, almost no work has compared these displays in mail surveys. Additionally, many web survey templates use wide rectangular buttons to select response options in individual items using a mobile-optimized design, different from the standard round answer space format typically used in mail surveys. In this study, we experimentally test grid versus item-by-item displays and round radio buttons versus wide rectangular buttons for battery items in a probability-based general population mixed-mode mail + web survey of adults in Nebraska. Consistent with past research, we find that item-by-item displays reduce straightlining rates compared to grid designs. We also find that respondents are less likely to select the last two response categories in the item-by-item displays than in the grid displays. Smartphone and computer web respondents have higher item nonresponse rates than mail respondents, and web respondents have lower straightlining rates than mail respondents, accounting for respondent characteristics. Reassuringly, there is no difference in data quality outcomes across radio button versus wide button formats. These findings replicate past research showing that item-by-item displays reduce straightlining but may shift answer categories, and they suggest that questionnaire designers can combine round radio button answer spaces on mail surveys with wide buttons on web surveys for battery items with little difference in data quality.
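As a concrete illustration of the straightlining outcome used above, the sketch below computes a straightlining rate from battery responses. The data frame, item names, and values are hypothetical and do not come from the study.

```python
import pandas as pd

# Hypothetical battery of four items, each rated on a 1-5 scale;
# each row is one respondent.
responses = pd.DataFrame({
    "item_1": [3, 5, 2, 4],
    "item_2": [3, 4, 2, 4],
    "item_3": [3, 5, 2, 4],
    "item_4": [3, 2, 2, 1],
})

# A respondent "straightlines" when every item in the battery
# receives the identical answer.
straightlined = responses.nunique(axis=1) == 1
straightlining_rate = straightlined.mean()
print(f"Straightlining rate: {straightlining_rate:.0%}")  # 50% here
```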
Recent developments in communication technology and changes in people’s communication habits facilitate new data collection forms in web surveys. Technical devices, such as computers, tablets, and smartphones, enable researchers to rethink established communication forms and add a human touch to web surveys. Designing web surveys to be more human-like has great potential to make communication between researchers and respondents more natural, which may result in higher survey satisfaction and data quality. In the existing survey literature, only a few studies have investigated respondents’ willingness to engage with new communication forms in web surveys. Hence, in the present study, we explore respondents’ willingness to take part in web surveys in which interviewers read questions via pre-recorded videos (question delivery) and respondents provide their answers orally via self-recorded videos (question answering). We included two willingness questions – one on question delivery via pre-recorded videos and one on question answering via self-recorded videos – in the non-probability SoSci panel in Germany. The results reveal that respondents’ willingness to have questions read by interviewers is higher than their willingness to self-record video answers. Believing that technology facilitates communication and perceiving the survey as interesting increases willingness, whereas evaluating the survey topic as sensitive decreases willingness. Personality traits, with the exception of extraversion, do not play a role in respondents’ willingness.
This study meta-analyzes thirty-nine study results published within the last ten years that directly compared Web and mail survey modes. Although considerable variation exists across the studies, the findings show that mail surveys generally achieve higher response rates than Web surveys. Two study features (population type and follow-up reminders) statistically account for part of the variation in response rate differences between Web and paper surveys across the comparative studies. College respondents appear to be more responsive to Web surveys, while other groups (e.g., medical doctors, school teachers, and general consumers) appear to prefer traditional mail surveys. Follow-up reminders appear to be less effective for Web survey respondents than for mail survey respondents. Other study features (random assignment of survey respondents, incentives, and publication year) are not statistically useful in accounting for the variation in response rate differences between Web and mail surveys.
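For readers unfamiliar with how such pooled response-rate differences are typically estimated, here is a minimal random-effects (DerSimonian-Laird) sketch. The per-study differences and variances are invented for illustration and are not the thirty-nine results analyzed in the paper.

```python
import numpy as np

# Hypothetical per-study response-rate differences (mail minus web)
# and their sampling variances; a real analysis would derive these
# from each study's sample sizes and observed rates.
d = np.array([0.12, 0.08, 0.20, 0.05, 0.15])       # rate differences
v = np.array([0.001, 0.002, 0.003, 0.001, 0.002])  # sampling variances

# Fixed-effect weights, then DerSimonian-Laird between-study variance.
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)

# Random-effects pooled estimate and its standard error.
w_re = 1.0 / (v + tau2)
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled difference: {d_pooled:.3f} (SE {se:.3f})")
```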
Research on mixed devices in web surveys is in its infancy. Using a randomized experiment, we investigated device effects (desktop PC, tablet, and mobile phone) for six response formats and four different numbers of scale points. N = 5,077 members of an online access panel participated in the experiment. Measurement invariance was assessed with an exact test, and Composite Reliability was examined. The results showed full data comparability across devices and formats, with the exception of the continuous Visual Analog Scale (VAS), but only limited comparability across different numbers of scale points. Device effects on reliability emerged in interactions with response formats and numbers of scale points: the VAS, mobile phones, and five-point scales consistently yielded lower reliability. We suggest technically less demanding implementations as well as a unified design for mixed-device surveys.
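As background on the reliability measure named above, the following sketch computes Composite Reliability from standardized factor loadings (the formula for McDonald's omega / Raykov's rho). The loadings are hypothetical; this is not the authors' exact procedure.

```python
import numpy as np

# Hypothetical standardized factor loadings for one scale.
loadings = np.array([0.72, 0.65, 0.80, 0.58, 0.70])
errors = 1.0 - loadings ** 2  # unique variances under standardization

# Composite reliability: (sum of loadings)^2 divided by itself
# plus the summed unique variances.
numerator = loadings.sum() ** 2
cr = numerator / (numerator + errors.sum())
print(f"Composite reliability: {cr:.3f}")
```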
A large number of findings in survey research suggest that misreporting on sensitive questions is situational and can vary with context. The methodological literature demonstrates that social desirability biases are less prevalent in self-administered surveys, particularly in Web surveys, where there is no interviewer and less risk of presenting oneself in an unfavorable light. Since the number of mobile Web users is growing, we focused our study on the effects of different devices (PC or cell phone) in Web surveys on respondents’ willingness to report sensitive information. To reduce selection bias, we carried out a two-wave cross-over experiment using a volunteer online access panel in Russia. Participants were asked to complete the questionnaire in both survey modes: PC Web and mobile Web. We hypothesized that features of mobile Web usage may affect response accuracy and lead to more socially desirable responses compared to the PC Web survey mode. We found significant differences in the reporting of alcohol consumption by mode, consistent with our hypothesis, but other sensitive questions did not show similar effects. We also found that the presence of familiar bystanders had an impact on responses, while the presence of strangers did not have a significant effect in either survey mode. Contrary to expectations, we did not find evidence of a positive impact of completing the questionnaire at home or of trust in data confidentiality on the level of reporting. These results can help survey practitioners improve data quality in Web surveys completed on different devices.
The increasing use of web-based surveys in social sciences research has brought forth the challenge of effectively identifying and managing inattentive/careless responding. Existing detection methods have shown limited success, highlighting the need for improved methodologies. This study introduces a novel approach that utilizes time-stamped action sequence data of mouse movements and employs deep learning models to detect careless responding. It introduces the concept of Approximate Areas of Interest (AAOIs) along with the application of Gated Recurrent Units (GRUs) and Bidirectional Long Short-Term Memory (BiLSTM) models. This research presents a flexible and efficient tool that can be applied across different scales and survey contexts. The results demonstrate the superior performance of the proposed approach in identifying group membership, achieving up to 95% accuracy when tested on experimental data with induced inattentiveness. The approach offers a promising tool for overcoming the pervasive challenge of detecting careless responding in computer-based surveys.
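In the spirit of the models described above, the following PyTorch sketch defines a minimal BiLSTM classifier over time-stamped mouse-movement sequences. The feature set, sequence length, and hyperparameters are illustrative assumptions, not the authors' specification.

```python
import torch
import torch.nn as nn

class CarelessnessBiLSTM(nn.Module):
    """Minimal bidirectional LSTM over per-step movement features."""

    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # attentive vs. careless

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # logit from the final time step

model = CarelessnessBiLSTM()
# Assumed per-step features: x, y, dwell time, approximate-AOI index.
batch = torch.randn(8, 200, 4)        # 8 sequences of 200 time steps
logits = model(batch)                 # (8, 1) careless-response logits
```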
The first objective of this article is to propose a conceptual framework of the effects of on-line questionnaire design on the quality of collected responses. Secondly, we present the results of an experiment in which different protocols were tested and compared in a randomised design on the basis of several quality indexes. Starting from previous categorizations and from the main factors identified in the literature, we first propose an initial global framework of questionnaire and question characteristics in a web survey, divided into five groups of factors. The framework was built to follow the successive stages of the response process, through the contact between the respondent and the questionnaire itself. Then, because the concept of 'response quality' has been studied in the survey methodology literature in only a very restricted way, it is discussed and extended with some more 'qualitative' criteria that could help researchers and practitioners obtain a deeper assessment of the survey output. For the experiment, on the basis of the factors chosen as major characteristics of the questionnaire design, eight versions of a questionnaire on young people's consumption patterns were created. The links to these on-line questionnaires were sent in November 2005 to a target of 10,000 young people. The article finally presents the results of our study and discusses the conclusions. Notable results come to light, especially regarding the influence of the length, interaction, and question-wording dimensions on response quality. We discuss the effects of Web-questionnaire design characteristics on the quality of data.
Surveys are a fundamental tool of empirical research, but they suffer from errors: in particular, respondents can have difficulties recalling information of interest to researchers. Recent technological developments offer new opportunities to collect data passively (i.e., without participants’ intervention), avoiding recall errors. One of these opportunities is registering online behaviors (e.g., visited URLs) through tracking software (a “meter”) voluntarily installed by a sample of individuals on their browsing devices. Nevertheless, metered data are also affected by errors and only cover part of the objective information, while subjective information is not directly observable. Asking participants about such missing information by means of web surveys conducted at the moment an event of interest is detected by the meter has the potential to fill the gap. However, this method requires participants to be willing to participate. This paper explores the willingness to participate in in-the-moment web surveys triggered by online activities recorded by a participant-installed meter. A conjoint experiment implemented in an opt-in metered panel in Spain reveals overall high levels of willingness to participate among panelists already sharing metered data, ranging from 69% to 95%. The main aspects affecting this willingness are the incentive levels offered. Limited differences across participants are observed, except for household size and education. Answers to open questions also confirm that the incentive is the key driver of the decision to participate, whereas other potentially problematic aspects, such as the limited time to participate, privacy concerns, and discomfort caused by being interrupted, play a limited role.
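As background on conjoint analysis of this kind, the sketch below estimates average marginal component effects (AMCEs) by regressing a binary willingness outcome on randomly assigned profile attributes. The data, attribute names, and levels are invented for illustration and do not reflect the paper's design.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical conjoint data: each row is one rated profile with
# randomized attributes (incentive level, survey length) and a
# binary willingness outcome.
df = pd.DataFrame({
    "willing":   [1, 0, 1, 1, 0, 1, 0, 1],
    "incentive": ["high", "low", "high", "high",
                  "low", "high", "low", "low"],
    "length":    ["short", "long", "long", "short",
                  "short", "long", "long", "short"],
})

# With random assignment, a linear regression on attribute dummies
# recovers the AMCE of each level relative to the baseline.
amce = smf.ols("willing ~ C(incentive, Treatment('low'))"
               " + C(length, Treatment('long'))", data=df).fit()
print(amce.params)
```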
Answers to open-ended questions are a valuable part of journalism surveys. However, due to the expense and difficulty of manual coding, the current state of how open-ended questions are used and analyzed in large-scale web surveys is not satisfactory. This article reviews the types, coding tasks, and automatic coding techniques of open-ended questions. We propose a five-step procedure for analyzing open-ended questions through an automatic coding approach: (a) locate the type of open-ended question, (b) choose the corresponding coding task, (c) adopt the appropriate automatic coding techniques, (d) perform the analysis, and (e) evaluate and interpret the results. We demonstrate the procedure with survey data from the Reuters Digital News Reports of 2019. Our proposed framework can serve as a practical guide for analyzing open-ended questions with automatic coding and also promote open science in journalism research. We conclude that although automatic coding cannot entirely replace human coding, the constant refinement of statistical models and the promotion of open sharing of textual data will gradually render autocoding a standard tool for researchers in journalism and communication.
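To make the autocoding idea concrete, here is a minimal supervised text-classification sketch with scikit-learn. The answers, codes, and category labels are hypothetical and stand in for a manually coded training subsample; one common realization of steps (c) and (d) above is to train on the coded subsample and predict codes for the remaining responses.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical manually coded open-ended answers (training data).
answers = ["I read the news on my phone every morning",
           "TV bulletins in the evening",
           "Mostly Twitter and news apps",
           "The evening television news"]
codes = ["online", "tv", "online", "tv"]

# TF-IDF features feeding a logistic-regression classifier.
autocoder = make_pipeline(TfidfVectorizer(), LogisticRegression())
autocoder.fit(answers, codes)

# Automatically code a new, uncoded response.
print(autocoder.predict(["I check news websites on my laptop"]))
```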
• Many respondents are unwilling to use a smartphone to answer a web survey.
• A selection bias could lead to overestimating the network size on smartphones.
• Smartphone usage did not have a negative effect on the network size.
• Smartphones can effectively be used for network research in tech-savvy populations.
• An open question can be used to identify respondents who show satisficing behavior.
The increasing use of smartphones around the world provides new opportunities for network data collection using smartphone surveys. We investigated experimentally whether the use of smartphones and of a recall aid affects the number of names reported in a network name generator question. In a German online access panel (N = 3,891), respondents were randomly assigned to answer the survey on their PC or on their smartphone, and to receive an open-ended recall aid question either before or after the name generator question. Results showed that respondents on PCs and smartphones reported the same number of network contacts, suggesting that smartphone surveys have no negative effect on network sizes in ego-centered network studies. However, requiring people to answer on smartphones resulted in a selection bias due to non-compliance, which may have led to an overrepresentation of persons with larger network sizes. The recall aid question did not lead to more reported names, but it proved to be an indicator of respondents’ motivation and response quality. In sum, the study suggests that smartphones can effectively be used for network research in tech-savvy populations or when respondents can choose to complete the survey on another device.