Surveying Clinicians by Web
Dykema, Jennifer; Jones, Nathan R.; Piché, Tara ...
Evaluation & the Health Professions, 09/2013, Volume 36, Issue 3
Journal Article
Peer reviewed
The versatility, speed, and reduced costs with which web surveys can be conducted with clinicians are often offset by low response rates. Drawing on best practices and general recommendations in the literature, we provide an evidence-based overview of methods for conducting online surveys with providers. We highlight important advantages and disadvantages of conducting provider surveys online and include a review of differences in response rates between web and mail surveys of clinicians. When administered online, design-based features affect rates of survey participation and data quality. We examine features likely to have an impact, including sample frames, incentives, contacts (type, timing, and content), mixed-mode approaches, and questionnaire length. We make several recommendations regarding optimal web-based designs, but more empirical research is needed, particularly with regard to identifying which combinations of incentive and contact approaches yield the highest response rates and are the most cost-effective.
Many studies rely on traditional web survey methods in which all contacts with sample members are through email and the questionnaire is administered exclusively online. Because it is difficult to effectively administer prepaid incentives via email, researchers frequently employ lotteries or prize draws as incentives even though their influence on survey participation is small. The current study examines whether a prize draw is more effective if it is divided into a few larger amounts versus several smaller amounts and compares prize draws to a small but guaranteed postpaid incentive. Data are from the 2019 Campus Climate Survey on Sexual Assault and Sexual Misconduct. Sample members include 38,434 undergraduate and graduate students at a large Midwestern university who were randomly assigned to receive: a guaranteed $5 Amazon gift card; entry into a high-payout drawing for one of four $500 prizes; or entry into a low-payout drawing for one of twenty $100 prizes. Results indicate the guaranteed incentive increased response rates, with no difference between the prize draws. While results from various data quality outcomes show the guaranteed incentive reduced break-off rates and the high-payout drawing increased item nonresponse, there were no differences across incentive conditions in rates of speeding, reporting of sensitive data, straightlining, or sample representativeness. As expected, the prize draws had much lower overall and per-complete costs.
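As an illustration of the kind of comparison this design supports, the sketch below tests whether response rates differ across the three incentive conditions with a chi-square test. All counts are hypothetical placeholders, not the study's data.

    # Hedged sketch: chi-square test of response rates across three incentive
    # arms. Every count below is invented for illustration only.
    from scipy.stats import chi2_contingency

    # rows: incentive condition; columns: [responded, did not respond]
    table = [
        [1900, 10911],  # guaranteed $5 gift card (hypothetical)
        [1600, 11211],  # high-payout drawing (hypothetical)
        [1610, 11201],  # low-payout drawing (hypothetical)
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")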
The number of web-based E-epidemiologic studies using online recruitment methods is increasing. However, the optimal online recruitment method in terms of maximizing recruitment rates is still unknown. Our aim was to compare the recruitment rates of three online recruitment methods and to describe how these rates differ according to individuals' socioeconomic and demographic factors.
A total of 2394 members of the 1993 Pelotas birth cohort who provided an e-mail address, a Facebook name, and a WhatsApp number during a face-to-face follow-up were randomly allocated to be recruited by e-mail, Facebook, or WhatsApp (798 individuals per method). This was a parallel randomised trial using block randomisation (block size = 3). Between January and February 2018, we sent messages inviting them to register on the web-based coortesnaweb platform. Recruitment rates were calculated for each method and stratified according to the individuals' socioeconomic and demographic characteristics. We also analysed absolute and relative inequalities in recruitment according to schooling and socioeconomic position.
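For readers unfamiliar with the allocation scheme, the sketch below shows one way block randomisation with block size 3 could assign participants evenly to the three recruitment arms; the function and variable names are assumptions for illustration.

    # Hedged sketch of block randomisation (block size = 3): each shuffled
    # block contains one slot per arm, so group sizes stay balanced.
    import random

    def block_randomise(ids, arms=("e-mail", "Facebook", "WhatsApp"), seed=42):
        rng = random.Random(seed)
        assignment = {}
        for start in range(0, len(ids), len(arms)):
            block = list(arms)
            rng.shuffle(block)  # random order of arms within this block
            for pid, arm in zip(ids[start:start + len(arms)], block):
                assignment[pid] = arm
        return assignment

    groups = block_randomise(list(range(2394)))  # 798 per arm, as in the trial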
Out of the 2394 individuals analysed, 642 registered on the platform, an overall recruitment rate of 26.8%. Recruitment rates for women were almost 10 percentage points higher than for men. Facebook was the most effective recruitment method, as 30.6% of those invited through the social network were recruited. Recruitment rates for e-mail and WhatsApp were similar (24.9%). E-mail and Facebook were the most effective recruitment methods for inviting highly educated and wealthier individuals. However, recruiting by e-mail also produced the highest inequalities according to schooling and socioeconomic position. In contrast, the lowest inequalities according to socioeconomic position were observed using Facebook.
Facebook was the most effective online recruitment method, also achieving the most equitable sample in terms of schooling and socioeconomic position. The effectiveness of online recruitment methods depends on the characteristics of the sample. It is important to know the profile of the target sample in order to decide which online recruitment method to use.
Brazilian Registry of Clinical Trials, identifier RBR-3dv7gc, retrospectively registered on 10 April 2018.
Survey research aims to collect robust and reliable data from respondents. However, despite researchers' efforts in designing questionnaires, survey instruments may be imperfect and question structure not as clear as it could be, creating a burden for respondents. If it were possible to detect such problems, this knowledge could be used to predict problems in a questionnaire during pretesting, to inform real-time interventions through responsive questionnaire design, or to indicate and correct measurement error after the fact. Previous research has used paradata, specifically response times, to detect difficulties and help improve user experience and data quality. Today, richer data sources are available, for example, the movements respondents make with their mouse, as an additional detailed indicator of the respondent–survey interaction. This article uses machine learning techniques to explore the predictive value of mouse-tracking data regarding a question's difficulty. We use data from a survey on respondents' employment history and demographic information, in which we experimentally manipulate the difficulty of several questions. Using measures derived from mouse movements, we predict whether respondents answered the easy or difficult version of a question, using and comparing several state-of-the-art supervised learning methods. We also develop a personalization method that adjusts for respondents' baseline mouse behavior and evaluate its performance. For all three manipulated survey questions, we find that including the full set of mouse movement measures and accounting for individual differences in these measures improve prediction performance over response-time-only models.
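To make the modelling setup concrete, here is a minimal sketch of predicting the easy versus difficult question version from mouse-derived features with a supervised learner. The features and data are simulated assumptions; the authors' actual measures and pipeline are not described here.

    # Hedged sketch: classify easy vs. difficult question versions from
    # mouse-movement features. Data are random, so AUC should hover near 0.5.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    # Assumed features: response time, cursor path length, pauses, direction
    # changes. A personalisation step, as in the article, might subtract each
    # respondent's baseline value from every feature before fitting.
    X = rng.normal(size=(n, 4))
    y = rng.integers(0, 2, size=n)  # 0 = easy version, 1 = difficult version

    clf = GradientBoostingClassifier(random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"mean cross-validated AUC: {scores.mean():.3f}")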
The grid question refers to a table layout for a series of survey question items (i.e., sub-questions) with the same introduction and identical response categories. Because of their complexity, concerns have long been raised about grids in web surveys on PCs, and these concerns are heightened on mobile devices. Some studies suggest decomposing grids into item-by-item layouts, while others argue that this is unnecessary. To address this challenge, this paper provides a comprehensive evaluation of the grid layout and four item-by-item alternatives, using 10 response quality indicators and 20 survey estimates. Results from the experimental web survey (n = 4644) suggest that item-by-item layouts (unfolding or scrolling) should be used instead of grids, not only on mobile devices but also on PCs. The former justifies the already increasing use of item-by-item layouts on mobile devices in survey practice, while the latter implies that the prevailing routine of using grids on PCs should be reconsidered.
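The abstract does not enumerate its ten response quality indicators, but one indicator commonly used when comparing grid and item-by-item layouts is straightlining. The sketch below is an assumed illustration of how it might be computed, not the paper's implementation.

    # Hedged sketch: straightlining rate = share of respondents who give the
    # identical answer to every item in a battery.
    import numpy as np

    def straightlining_rate(responses):
        # responses: (n_respondents, n_items) array of category codes
        responses = np.asarray(responses)
        straight = (responses == responses[:, [0]]).all(axis=1)
        return straight.mean()

    print(straightlining_rate([[3, 3, 3, 3], [1, 2, 4, 2]]))  # -> 0.5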
The day of the week on which sample members are invited to participate in a web survey might influence their propensity to respond, or to respond promptly (within two days of the invitation). This effect could differ between sample members with different characteristics. We explore such effects using a large-scale experiment implemented on the Understanding Society Innovation Panel, in which some people received an invitation on a Monday and some on a Friday. Specifically, we test whether any effect of the invitation day is moderated by economic activity status (which may result in a different organisation of time by day of the week), previous participation in the panel, or whether the invitation was sent only by post or by post and email simultaneously. Overall, we do not find any effect of day of invitation on survey participation or on prompt participation. However, sample members who provided an email address, and thus were contacted by email in addition to postal letter, are less likely to participate if invited on a Friday (email reminders: Sunday and Tuesday) as opposed to a Monday (email reminders: Wednesday and Friday). Given that no difference between the two protocols is found for prompt response, the effect seems to be due to the day on which reminders are mailed. With respect to sample members' economic activity status, those without a job and the retired are less likely to participate when invited on a Friday; this result also holds for prompt participation, but only for retired respondents. Likewise, sample members who work long hours are less likely to participate when invited on a Friday; however, no effect is found for prompt response.
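A moderation test of this kind is often specified as a participation logit with an interaction term. The sketch below is an assumed illustration with simulated data, not the study's specification.

    # Hedged sketch: does the Friday-invitation effect differ by contact
    # protocol? Simulated data; coefficients should be near zero here.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "friday": rng.integers(0, 2, n),  # 1 = invited on Friday
        "email": rng.integers(0, 2, n),   # 1 = postal + email invitation
    })
    df["participated"] = rng.binomial(1, 0.5, n)

    fit = smf.logit("participated ~ friday * email", data=df).fit(disp=False)
    print(fit.params)  # friday, email, and friday:email interaction terms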
Many colleges and universities conduct web-based campus climate surveys to understand the prevalence and nature of sexual assault among their students. When designing and fielding a web survey to measure a sensitive topic like sexual assault, methodological decisions, including the length of the field period and the use or amount of an incentive, can affect the representativeness of the respondent sample, leading to biased or imprecise estimates. This study uses data from the Campus Climate Survey Validation Study (CCSVS) to assess how the interaction between field period length and incentive amount affects nonresponse, sample representativeness, and the precision of survey estimates. Research suggests that robust incentives give potential respondents a reason to complete the survey beyond their intrinsic motivation to do so. Likewise, extending the field period gives more time to people who may be less intrinsically motivated to complete the survey. Both serve to increase sample size and representativeness, minimize bias, and improve estimate precision. Schools, however, sometimes lack the time and/or resources for both a robust incentive and a lengthy field period, and this study examines the extent to which the potential negative impact of forgoing one can be mitigated by the presence of the other. Findings indicate that target response rates can be achieved with a smaller incentive if the field period is lengthy, but even with a lengthy field period, a smaller incentive can result in biased estimates due to a lack of representativeness. Conversely, when a robust incentive is used and weights are developed to adjust for nonresponse, a shorter field period will not have a significant impact on point estimates, but the estimates will be less precise because fewer respondents participate in the survey.
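As a pointer to what the nonresponse adjustment involves, the sketch below shows simple cell-based weighting, where each respondent is weighted by the ratio of a stratum's frame share to its respondent share. The strata and shares are hypothetical, not taken from the CCSVS.

    # Hedged sketch: cell-based nonresponse weights.
    import pandas as pd

    respondents = pd.DataFrame({"stratum": ["A", "A", "B", "B", "B", "C"]})
    frame_share = {"A": 0.5, "B": 0.3, "C": 0.2}  # assumed sampling frame

    resp_share = respondents["stratum"].value_counts(normalize=True)
    respondents["weight"] = respondents["stratum"].map(
        lambda s: frame_share[s] / resp_share[s])
    print(respondents)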
Survey data collection costs have risen to a point where many survey researchers and polling companies are abandoning large, expensive probability-based samples in favor of less expensive nonprobability samples. The empirical literature suggests this strategy may be suboptimal for multiple reasons, among them that probability samples tend to outperform nonprobability samples on accuracy when assessed against population benchmarks. However, nonprobability samples are often preferred due to convenience and cost. Instead of forgoing probability sampling entirely, we propose a method of combining probability and nonprobability samples in a way that exploits their strengths to overcome their weaknesses within a Bayesian inferential framework. Using simulated data, we evaluate supplementing inferences based on small probability samples with prior distributions derived from nonprobability data. We demonstrate that informative priors based on nonprobability data can lead to reductions in variances and mean squared errors for linear model coefficients. The method is also illustrated with actual probability and nonprobability survey data. We conclude with a discussion of these findings, their implications for survey practice, and possible research extensions.
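The core idea, an informative prior built from nonprobability data updating a small probability sample, can be shown in a one-parameter conjugate-normal sketch. All quantities below are simulated assumptions, not the article's models.

    # Hedged sketch: posterior for a regression slope when a nonprobability
    # estimate serves as the prior and a small probability sample supplies
    # the likelihood (known error variance, regression through the origin).
    import numpy as np

    rng = np.random.default_rng(7)

    prior_mean, prior_var = 1.8, 0.25  # assumed nonprobability-based prior

    n, sigma2 = 40, 1.0                # small probability sample
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(scale=np.sqrt(sigma2), size=n)

    beta_hat = (x @ y) / (x @ x)       # OLS slope from the probability sample
    like_prec = (x @ x) / sigma2       # likelihood precision for the slope

    # Conjugate update: precisions add; means are precision-weighted.
    post_prec = 1 / prior_var + like_prec
    post_mean = (prior_mean / prior_var + beta_hat * like_prec) / post_prec
    print(f"OLS {beta_hat:.3f} -> posterior mean {post_mean:.3f} "
          f"(sd {post_prec ** -0.5:.3f})")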
Surveys have been used as a main tool of data collection in many areas of research and for many years. However, the environment is changing increasingly quickly, creating new challenges and opportunities. This article argues that, in this new context, human memory limitations lead to inaccurate results when surveys are used to study objective online behavior: people cannot recall everything they did. It therefore investigates the possibility of supplementing survey data with passive data from a tracking application (called a "meter") installed on participants' devices to register their online behavior. After evaluating the extent of some of the main drawbacks of passive data collection with a case study (the Netquest metered panel in Spain), this article shows that the web survey and the meter lead to very different results about the online behavior of the same sample of respondents, demonstrating the need to combine several sources of data collection in the future.
Web surveys permit researchers to use graphic or symbolic elements alongside the text of response options to help respondents process the categories. Smiley faces are one example, used to communicate positive and negative domains. How respondents visually process these smiley faces, including whether the faces detract from the question's text, is understudied. We report the results of two eye-tracking experiments in which satisfaction questions were asked with and without smiley faces. Respondents to the questions with smiley faces spent less time reading the question stem and response option text than respondents to the questions without smiley faces, but the response distributions did not differ by version. We also find evidence that lower-literacy respondents rely more on the smiley faces than higher-literacy respondents.