With the rising popularity of web surveys and the increasing use of paradata by survey methodologists, assessing the information stored in user agent strings becomes inevitable. These data contain meaningful information about the browser, operating system, and device that a survey respondent uses. This article provides an overview of user agent strings, their specific structure and history, how they can be obtained when conducting a web survey, and what kind of information can be extracted from them. Further, the user-written command parseuas is introduced as an efficient means to gather detailed information from user agent strings. The application of parseuas is illustrated by an example that draws on a pooled data set consisting of 29 web surveys.
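A minimal sketch of this kind of extraction follows. It is illustrative only: the regular expressions and field names are assumptions, and a production parser (such as the parseuas command described above) relies on a far larger rule set.

```python
import re

def parse_user_agent(ua: str) -> dict:
    """Extract coarse browser, OS, and device hints from a user agent string.
    Illustrative sketch only; real parsers use far larger rule sets."""
    browser = "unknown"
    # Order matters: Chrome UAs also contain "Safari", Edge UAs contain "Chrome".
    for name, pattern in [
        ("Edge", r"Edg(e|A|iOS)?/"),
        ("Chrome", r"Chrome/"),
        ("Firefox", r"Firefox/"),
        ("Safari", r"Safari/"),
        ("Internet Explorer", r"MSIE |Trident/"),
    ]:
        if re.search(pattern, ua):
            browser = name
            break
    os_name = "unknown"
    # Check Android before Linux: Android UAs contain both tokens.
    for name, pattern in [
        ("Android", r"Android"),
        ("iOS", r"iPhone|iPad"),
        ("Windows", r"Windows NT"),
        ("macOS", r"Mac OS X"),
        ("Linux", r"Linux"),
    ]:
        if re.search(pattern, ua):
            os_name = name
            break
    mobile = bool(re.search(r"Mobile|Android|iPhone", ua))
    return {"browser": browser, "os": os_name, "mobile": mobile}

ua = ("Mozilla/5.0 (Linux; Android 11; Pixel 5) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/96.0.4664.45 Mobile Safari/537.36")
print(parse_user_agent(ua))  # {'browser': 'Chrome', 'os': 'Android', 'mobile': True}
```

The ordering of the match rules is the essential design point: because user agent strings accumulate tokens for historical compatibility, a naive first-match approach misclassifies unless more specific tokens are tested first.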
Surveys completed on mobile web devices (smartphones) have been found to take longer than surveys completed on a PC. This has been found both in surveys where respondents can choose which device they use and in surveys where respondents are randomly assigned to devices. A number of potential explanations have been offered for these findings, including (1) slower transmission over cellular or Wi-Fi networks, (2) the difficulty of reading questions and selecting responses on a small device, and (3) the increased mobility of mobile web users, who have more distractions while answering web surveys. In a secondary analysis of student surveys, we find that only about one-fifth of the time difference can be accounted for by transmission time (between-page time), with the balance being within-page time differences. Using multilevel models, we explore possible page-level (question-level) and respondent-level factors that may contribute to the time difference. We find that much of the time difference can be accounted for by the additional scrolling required on mobile devices, especially for grid questions.
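The between-page versus within-page decomposition described above can be sketched from client-side timing paradata. The function and timestamps below are hypothetical, assuming each survey page records a load time and a submit time:

```python
def decompose_times(pages):
    """Split total survey duration into within-page time (page load to
    submit) and between-page time (submit to next page load, i.e. the
    transmission component). `pages` is a list of (load_ts, submit_ts)
    tuples in seconds since the start of the survey."""
    within = sum(submit - load for load, submit in pages)
    between = sum(pages[i + 1][0] - pages[i][1] for i in range(len(pages) - 1))
    return within, between

# Hypothetical three-page survey session.
pages = [(0.0, 12.5), (14.0, 40.0), (41.2, 55.0)]
within, between = decompose_times(pages)
print(round(within, 1), round(between, 1))  # 52.3 2.7
```

In this toy session, transmission accounts for only a small share of total duration, which mirrors the roughly one-fifth share of the mobile-PC gap attributed to between-page time above.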
Historically, web-based economic valuation surveys published in the literature have generally been rather sparse on providing details of their web development and administration. Given the many survey development choices and consequences that are particular to web surveys, the lack of a common reporting standard for the survey administration part can make it very difficult to judge the quality and validity of results obtained in a web-based valuation survey. This paper provides such a reporting checklist for stated preference valuation surveys developed for and administered through the web. The checklist is developed based on the bulk of knowledge gained so far with web-based surveys. This knowledge is compiled from an extensive review of relevant literature dated from 2001 to the beginning of 2015 in the Scopus database. Somewhat surprisingly, relatively few papers are concerned with survey mode effects or with the new opportunities for experimentation that the web survey mode presents. In relation to this, our paper also outlines future research opportunities and directions that seem particularly relevant to further investigate and validate the increasing use of web-based questionnaires for economic valuation surveys.
• Up-to-date literature review on web surveys in stated preference valuation.
• Compilation of a checklist on web surveys in stated preference valuation.
• Statements about current web surveys in stated preference valuation.
• Survey mode ranking vis-à-vis the web mode.
As companies increasingly conduct marketing research online (e.g., through social networking sites or their brand community platforms), the knowledge that others are also filling out the same surveys becomes increasingly salient to respondents. This research examines how the salience of this knowledge influences consumer judgments. Two important characteristics of our research paradigm are especially relevant to digital contexts: (1) judgments made by consumers are neither observable nor subject to others' disapproval; and (2) consensus is not observable or verifiable. Nevertheless, in six main studies and one auxiliary study (Web Appendix), we found that high salience of the knowledge that others are also evaluating reduced judgment extremity, quantified as the degree or strength of an evaluation or numeric estimate about a judgment target. This effect was driven by consumers' tendency to predict a moderate consensus and to conform to this perception. Implications for marketing research and crowdsourcing are discussed.
Researchers attempting to survey refugees over time face methodological issues because of the transient nature of the target population. In this article, we examine whether applying smartphone technology could alleviate these issues. We interviewed 529 refugees and afterward invited them to four follow-up mobile web surveys and to install a research app for passive mobile data collection. Our main findings are as follows: First, participation in mobile web surveys declines rapidly and is rather selective, with significant coverage and nonresponse biases. Second, we do not find any factor predicting types of smartphone ownership, and only low reading proficiency is significantly correlated with app nonparticipation. However, obtaining sufficiently large samples is challenging: only 5 percent of the eligible refugees installed our app. Third, offering a 30-euro incentive leads to a statistically insignificant increase in participation in passive mobile data collection.
Central cancer registries are often used to survey population-based samples of cancer survivors. These surveys are typically administered via paper or telephone. In most populations, web surveys obtain much lower response rates than paper surveys. This study assessed the feasibility of web surveys for collecting patient-reported outcomes via a central cancer registry.
Potential participants were sampled from Utah Cancer Registry records. Sample members were randomly assigned to receive a web or paper survey, and then randomized to either receive or not receive an informative brochure describing the cancer registry. We calculated adjusted risk ratios with 95% confidence intervals to compare response likelihood and the demographic profile of respondents across study arms.
The web survey response rate (43.2%) was lower than the paper survey response rate (50.4%), but this difference was not statistically significant (adjusted risk ratio = 0.88, 95% confidence interval = 0.72, 1.07). The brochure also did not significantly influence the proportion responding (adjusted risk ratio = 1.03, 95% confidence interval = 0.85, 1.25). There were few differences in the demographic profiles of respondents across the survey modes. Older age increased the likelihood of response to a paper questionnaire but not a web questionnaire.
Web surveys of cancer survivors are feasible without a significant reduction in response rates, but providing a paper response option may be advisable, particularly when surveying older individuals. Further examination of the varying effects of brochure enclosures across different survey modes is warranted.
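The unadjusted risk ratio and Wald confidence interval underlying comparisons like the one above can be sketched as follows. The counts are hypothetical, chosen only to reproduce the reported response rates (43.2% web vs. 50.4% paper); the study's own estimates are additionally adjusted for covariates, so the interval below will not match the published one exactly.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Unadjusted risk ratio of two groups with a Wald 95% CI on the
    log scale. a/n1 = responders/sample in group 1, b/n2 in group 2."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts matching the reported rates: 216/500 web, 252/500 paper.
rr, lo, hi = risk_ratio_ci(216, 500, 252, 500)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Working on the log scale is the standard choice here because the sampling distribution of log(RR) is approximately normal, while the RR itself is bounded below by zero and skewed.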
Web-Based Research in Psychology. Reips, Ulf-Dietrich. Zeitschrift für Psychologie, 12/2021, Vol. 229, No. 4. Journal article, peer reviewed, open access.
The present article reviews web-based research in psychology. It captures principles, learnings, and trends in several types of web-based research that show similar developments related to web technology and its major shifts (e.g., the appearance of search engines, browser wars, the deep web, commercialization, web services, HTML5, ...) as well as distinct challenges. The types of web-based research discussed are web surveys and questionnaire research, web-based tests, web experiments, Mobile Experience Sampling, and non-reactive web research, including big data. A number of web-based methods that have become important in research methodology are presented and discussed: the one-item-one-screen design, the seriousness check, instruction manipulation and other attention checks, the multiple site entry technique, the subsampling technique, the warm-up technique, and web-based measurement. Pitfalls and best practices are then described, especially regarding dropout and other nonresponse, recruitment of participants, and the interaction between technology and psychological factors. The review concludes with a discussion of important concepts that have developed over 25 years and an outlook on future developments in web-based research.
The objective of this study was to compare results of using web-based and mail (postal) Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) data collection protocols.
Patients who had been hospitalized in a New England hospital were surveyed about their hospital experience. Patients who provided email addresses were randomized to 1 of 3 data collection protocols: web alone, web with postal mail follow-up, and postal mail only. Those who did not provide email addresses were surveyed using postal mail only. Analyses compared response rates, respondent characteristics, and patient-reported experiences.
Participants were patients discharged from the study hospital to home during an 8-week period.
Measures included response rates, characteristics of respondents, 6 composite measures of their patient experiences, and 2 ratings of the hospital.
Response rates were significantly lower for the web-only protocol than for the mail or combined protocols, and those who had not provided email addresses had lower response rates. Those over 65 were more likely than others to respond across all protocols, especially the mail-only protocol. Respondents without email addresses were older, less educated, and reported worse health than those who provided email addresses. After adjusting for respondent differences, those in the combined protocol differed significantly from the mail-only (postal) respondents on 2 measures of patient experience; those in the web-only protocol differed on one. Those not providing an email address differed from those who did on one measure.
If web-based protocols are used for HCAHPS surveys, adjustments for the mode of data collection are needed to make results comparable.
Knowledge questions frequently are used in survey research to measure respondents' topic-related cognitive ability and memory. However, in self-administered surveys, respondents can search external sources for additional information to answer a knowledge question correctly. In this case, the knowledge question measures accessible and procedural memory. Depending on what the knowledge question aims at, the validity of this measure is limited. Thus, in this study, we conducted three experiments using a web survey to investigate the effects of task difficulty, respondents' ability, and respondents' motivation on the likelihood of searching external sources for additional information as a form of over-optimizing response behavior when answering knowledge questions. We found that respondents who are highly educated and more interested in the survey are more likely to invest additional effort to answer knowledge questions correctly. Most importantly, our data showed that for these respondents, a more difficult question design further increases the likelihood of over-optimizing response behavior.