Surveys reflect societal change in a way that few other research tools do. Over the past two decades, three developments have transformed surveys. First, survey organizations have adopted new methods for selecting telephone samples; these new methods were made possible by the creation of large databases that include all listed telephone numbers in the United States. A second development has been the widespread decline in response rates for all types of surveys. In the face of this problem, survey researchers have developed new theories of nonresponse that build on the persuasion literature in social psychology. Finally, surveys have adopted many new methods of data collection; the new modes reflect technological developments in computing and the emergence of the Internet. Research has spawned several theories that examine how characteristics of the data collection method shape the answers obtained. Rapid change in survey methods is likely to continue in the coming years.
Web Surveys by Smartphones and Tablets. Tourangeau, Roger; Sun, Hanyu; Yan, Ting ...
Social Science Computer Review, 10/2018, Volume 36, Issue 5. Journal Article. Peer reviewed.
Does completing a web survey on a smartphone or tablet computer reduce the quality of the data obtained compared to completing the survey on a laptop computer? This is an important question, since a growing proportion of web surveys are done on smartphones and tablets. Several earlier studies have attempted to gauge the effects of the switch from personal computers to mobile devices on data quality. We carried out a field experiment in eight counties around the United States that compared responses obtained by smartphones, tablets, and laptop computers. We examined a range of data quality measures including completion times, rates of missing data, straightlining, and the reliability and validity of scale responses. A unique feature of our study design is that it minimized selection effects; we provided the randomly determined device on which respondents completed the survey after they agreed to take part. As a result, respondents may have been using a device (e.g., a smartphone) for the first time. However, like many of the prior studies examining mobile devices, we find few effects of the type of device on data quality.
How Errors Cumulate: Two Examples. Tourangeau, Roger.
Journal of Survey Statistics and Methodology, 06/2020, Volume 8, Issue 3. Journal Article.
Abstract
This article examines the relationship among different types of nonobservation errors (all of which affect estimates from nonprobability internet samples) and between nonresponse and measurement errors. Both are examples of how different error sources can interact. Estimates from nonprobability samples seem to have more total error than estimates from probability samples, even ones with very low response rates. This finding suggests that the combination of coverage, selection, and nonresponse errors has greater cumulative effects than nonresponse error alone. The probabilities of having internet access, joining an internet panel, and responding to a particular survey request are probably correlated and, as a result, may lead to greater covariances with survey variables than response propensities alone; the biases accentuate one another. With nonresponse and measurement error, the two sources seem more or less uncorrelated, with one exception: those most prone to social desirability bias (those in the undesirable categories) are also less likely to respond. In addition, the propensity for unit nonresponse seems to be related to item nonresponse.
Psychologists have worried about the distortions introduced into standardized personality measures by social desirability bias. Survey researchers have had similar concerns about the accuracy of survey reports about such topics as illicit drug use, abortion, and sexual behavior. The article reviews the research done by survey methodologists on reporting errors in surveys on sensitive topics, noting parallels and differences from the psychological literature on social desirability. The findings from the survey studies suggest that misreporting about sensitive topics is quite common and that it is largely situational. The extent of misreporting depends on whether the respondent has anything embarrassing to report and on design features of the survey. The survey evidence also indicates that misreporting on sensitive topics is a more or less motivated process in which respondents edit the information they report to avoid embarrassing themselves in the presence of an interviewer or to avoid repercussions from third parties.
This study compared three methods of collecting survey data about sexual behaviors and other sensitive topics: computer-assisted personal interviewing (CAPI), computer-assisted self-administered interviewing (CASI), and audio computer-assisted self-administered interviewing (ACASI). Interviews were conducted with an area probability sample of more than 300 adults in Cook County, Illinois. The experiment also compared open and closed questions about the number of sex partners and varied the context in which the sex partner items were embedded. The three mode groups did not differ in response rates, but the mode of data collection did affect the level of reporting of sensitive behaviors: both forms of self-administration tended to reduce the disparity between men and women in the number of sex partners reported. Self-administration, especially via ACASI, also increased the proportion of respondents admitting that they had used illicit drugs. In addition, when the closed answer options emphasized the low end of the distribution, fewer sex partners were reported than when the options emphasized the high end of the distribution; responses to the open-ended versions of the sex partner items generally fell between responses to the two closed versions.
Although it is well established that self-administered questionnaires tend to yield fewer reports in the socially desirable direction than do interviewer-administered questionnaires, less is known about whether different modes of self-administration vary in their effects on socially desirable responding. In addition, most mode comparison studies lack validation data and thus cannot separate the effects of differential nonresponse bias from the effects of differences in measurement error. This paper uses survey and record data to examine mode effects on the reporting of potentially sensitive information by a sample of recent university graduates. Respondents were randomly assigned to one of three modes of data collection (conventional computer-assisted telephone interviewing (CATI), interactive voice response (IVR), and the Web) and were asked about both desirable and undesirable attributes of their academic experiences. University records were used to evaluate the accuracy of the answers and to examine differences in nonresponse bias by mode. Web administration increased the level of reporting of sensitive information and reporting accuracy relative to conventional CATI, with IVR intermediate between the other two modes. Both mode of data collection and the actual status of the respondent influenced whether respondents found an item sensitive.
This paper draws on individual-level data from the National Study of Family Growth (NSFG) to identify likely underreporters of abortion and miscarriage and examine their characteristics. The NSFG asks about abortion and miscarriage twice, once in the computer-assisted personal interviewing (CAPI) part of the questionnaire and again in the audio computer-assisted self-interviewing (ACASI) part. We used two different methods to identify likely underreporters of abortion and miscarriage: direct comparison of answers obtained from CAPI and ACASI and latent class models. The two methods produce very similar results. Although miscarriages are just as prone to underreporting as abortions, characteristics of women underreporting abortion differ somewhat from those misreporting miscarriages. Underreporters of abortions tended to be older, poorer, less likely to be Hispanic or Black, and more likely to have no religion. They also reported more traditional attitudes toward sexual behavior. By contrast, underreporters of miscarriage also tended to be older, poorer, and more likely to be Hispanic or Black, but were also more likely to have children in the household, had fewer pregnancies, and held less traditional attitudes toward marriage.
Surveys undergird government statistical systems and social scientific research throughout the world. Rates of nonresponse are rising in cross-sectional surveys (those conducted during a fixed period of time and not repeated). Although this trend worries those concerned with the validity of survey data, there is no necessary relationship between the rate of nonresponse and the degree of bias. A high rate of nonresponse merely creates the potential for bias, but the degree of bias depends on how factors promoting nonresponse are related to variables of interest. Nonresponse can be reduced by offering financial incentives to respondents and by careful design before entering the field, creating a trade-off between cost and potential bias. When bias is suspected, it can be countered by weighting individual cases by the inverse of their response propensity. Response propensities are typically estimated using a logistic regression equation to predict the dichotomous outcome of survey participation as a function of auxiliary variables. The Multi-level Integrated Database Approach employs multiple databases to collect as much information as possible about the target sample during the initial sampling stage and at all possible levels of aggregation to maximize the accuracy of estimated response propensities.
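The inverse-propensity weighting described above can be sketched with a toy numeric example. This is an invented illustration, not from the abstract: two strata with known response propensities stand in for propensities fitted by logistic regression, and the values and propensities are chosen only to make the arithmetic transparent.

```python
# Hypothetical population of 20 people in two strata (values invented):
#   Stratum A: 10 people, y = 10.0, response propensity 0.2 (2 respond)
#   Stratum B: 10 people, y = 30.0, response propensity 0.8 (8 respond)
population = [(10.0, 0.2)] * 10 + [(30.0, 0.8)] * 10
population_mean = sum(y for y, _ in population) / len(population)  # 20.0

# Realized respondents, matching the propensities exactly for clarity.
respondents = [(10.0, 0.2)] * 2 + [(30.0, 0.8)] * 8

# The naive respondent mean over-represents the high-propensity stratum,
# so it is biased even though the overall response rate (50%) is "known".
naive_mean = sum(y for y, _ in respondents) / len(respondents)

# Weight each respondent by the inverse of its response propensity and
# take the weighted mean (weighted sum divided by the sum of weights).
weights = [1.0 / p for _, p in respondents]
weighted_mean = sum(w * y for w, (y, _) in zip(weights, respondents)) / sum(weights)

print(naive_mean, weighted_mean)  # → 26.0 20.0
```

The naive mean (26.0) is pulled toward the stratum that responds more often; reweighting by 1/propensity recovers the population mean (20.0) exactly here because the realized respondents match the propensities. In practice the propensities are estimates from a logistic regression on auxiliary variables, so the correction is only as good as that model.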