This paper draws on individual-level data from the National Survey of Family Growth (NSFG) to identify likely underreporters of abortion and miscarriage and examine their characteristics. The NSFG asks about abortion and miscarriage twice, once in the computer-assisted personal interviewing (CAPI) part of the questionnaire and again in the audio computer-assisted self-interviewing (ACASI) part. We used two different methods to identify likely underreporters of abortion and miscarriage: direct comparison of the answers obtained from CAPI and ACASI, and latent class models. The two methods produce very similar results. Although miscarriages are just as prone to underreporting as abortions, the characteristics of women underreporting abortion differ somewhat from those of women misreporting miscarriages. Underreporters of abortion tended to be older, poorer, less likely to be Hispanic or Black, and more likely to have no religion. They also reported more traditional attitudes toward sexual behavior. By contrast, underreporters of miscarriage also tended to be older and poorer, but were more likely to be Hispanic or Black, more likely to have children in the household, had fewer pregnancies, and held less traditional attitudes toward marriage.
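As a rough illustration of the direct-comparison method described above, here is a minimal sketch in Python. The field names, data, and one-sided comparison rule are hypothetical and do not reflect the NSFG's actual coding; the point is only that a respondent who admits more events in the self-administered ACASI section than in the interviewer-administered CAPI section can be flagged as a likely underreporter.

```python
# Illustrative sketch: flag a respondent as a likely underreporter when
# she reports fewer events in CAPI than in ACASI. All field names and
# data below are invented for illustration.

def flag_underreporters(records):
    """records: list of dicts with hypothetical keys
    'id', 'capi_abortions', 'acasi_abortions'."""
    flagged = []
    for r in records:
        # More events admitted in ACASI than in CAPI suggests the
        # CAPI answer was underreported.
        if r["capi_abortions"] < r["acasi_abortions"]:
            flagged.append(r["id"])
    return flagged

sample = [
    {"id": 1, "capi_abortions": 0, "acasi_abortions": 1},  # likely underreporter
    {"id": 2, "capi_abortions": 1, "acasi_abortions": 1},  # consistent
]
print(flag_underreporters(sample))  # [1]
```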
Introduction
This paper reports a study done to estimate the reliability and validity of answers to the Youth and Adult questionnaires of the Population Assessment of Tobacco and Health (PATH) Study.
Methods
407 adult and 117 youth respondents completed the wave 4 (2016–2017) PATH Study interview twice, 6–24 days apart. The reinterview data were used to estimate the reliability of answers to the questionnaire. Kappa statistics, gross discrepancy rates, and correlations between answers to the initial interview and the reinterview were used to measure reliability. We examined every item in the questionnaire for which there were at least 100 observations. After the reinterview, most respondents provided a saliva sample that allowed us to assess the accuracy of their answers to the tobacco use questions.
Results
There was generally a very high level of agreement between answers in the interview and reinterview. On the key current tobacco use items, the average kappa (the agreement rate adjusted for chance agreement) was 0.79 for adult respondents (age 18 or older). Youth respondents exhibited equally high levels of agreement across interviews. The items on current tobacco use also exhibited high levels of agreement with saliva test results (kappa=0.72). Rating scale items showed lower levels of exact agreement across interviews, but the answers were generally within one scale point or category.
Conclusions
The PATH Study questions were developed using a careful protocol, and the results indicate the answers provide reliable and valid information about tobacco use.
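Because the reliability results above lean on the kappa statistic, a minimal sketch of how chance-adjusted agreement is computed may be useful. This is the generic Cohen's kappa formula, kappa = (p_o - p_e) / (1 - p_e), not the study's own estimation code, and the answer data are invented.

```python
from collections import Counter

def cohens_kappa(x, y):
    """Chance-adjusted agreement between two parallel answer lists,
    e.g. interview vs. reinterview responses to the same item."""
    n = len(x)
    observed = sum(a == b for a, b in zip(x, y)) / n  # raw agreement rate
    cx, cy = Counter(x), Counter(y)
    # Expected agreement if the two sets of answers were independent.
    expected = sum(cx[c] * cy[c] for c in cx) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented yes/no current-use answers from an interview and a reinterview.
t1 = ["yes", "yes", "no", "no", "yes", "no", "no", "no"]
t2 = ["yes", "no",  "no", "no", "yes", "no", "no", "yes"]
print(round(cohens_kappa(t1, t2), 2))  # 0.47 with these invented data
```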
We live in an era when people do not seem to want to do surveys anymore. Response rates have been falling since the 1970s, and even high-quality telephone surveys now often have response rates in the low single digits. The various countermeasures survey researchers have taken to combat this trend, particularly making more attempts to contact sample members, have sharply raised data collection costs. For example, the per household cost of conducting the US Decennial Census was seven times greater (in constant dollars) in 2010 than in 1970. The twin phenomena of falling survey response rates and increasing survey costs are well documented but poorly understood. It is clear that people are much more likely to refuse to take part in surveys now than they were in the past, although it is not clear why. And the trend appears to be affecting the entire developed world, not just the United States. Because of the increased risk of bias and the higher costs of doing surveys, some researchers have advocated turning to nonprobability samples, such as volunteer Web panels, or abandoning surveys altogether in favor of administrative data. At the same time, the demand for accurate information on a range of topics has only increased. Despite the unfavorable environment for surveys, some researchers have taken up the challenge of conducting studies of hard-to-survey populations, such as small ethnic minorities or itinerant populations, like the Irish Travelers. The description of their study at the National Center for Health Statistics by Galinsky et al. (p. 1384) is an extremely useful addition to the literature on hard-to-survey populations. Populations can be difficult to survey for a variety of reasons. They can be difficult to sample (e.g., because population members are rare or highly mobile); they can be hard to identify (e.g., because members are reluctant to admit to being part of a stigmatized population); they can be hard to locate (e.g., because members move frequently) or to contact (e.g., because gatekeepers bar access to members of the population); they may be hard to persuade to take part (e.g., because members mistrust the authorities); or they may be hard to interview (e.g., because of language barriers). The primary challenges for the Native Hawaiian and Pacific Islander (NHPI) National Health Interview Survey were the rarity of this population (which comprises less than 1 in 200 Americans) and its potential mistrust of the researchers.
The Design of Grids in Web Surveys
Couper, Mick P.; Tourangeau, Roger; Conrad, Frederick G.
Social Science Computer Review, 06/2013, Volume 31, Issue 3
Journal Article
Peer reviewed
Open access
Grid or matrix questions are associated with a number of problems in web surveys. In this article, we present results from two experiments testing the design of grid questions to reduce breakoffs, missing data, and satisficing. The first examines dynamic elements to help guide respondents through the grid and tests splitting a larger grid into component pieces. The second manipulates the visual complexity of the grid and examines ways of simplifying it. We find that using dynamic feedback to guide respondents through a multiquestion grid helps reduce missing data. Splitting the grids into component questions further reduces missing data and motivated underreporting. The visual complexity of the grid appeared to have little effect on performance.
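A minimal sketch of the "splitting" manipulation, under invented item wordings: a grid that asks several items against one shared response scale is decomposed into a sequence of stand-alone single questions. This is an illustration of the idea, not the experiments' actual instrument.

```python
# Hypothetical grid definition: several items sharing one response scale.
grid = {
    "scale": ["Never", "Rarely", "Sometimes", "Often"],
    "items": ["I read the instructions carefully.",
              "I answered without thinking.",
              "I looked up information when unsure."],
}

def split_grid(grid):
    """Return one stand-alone single-question definition per grid row."""
    return [{"text": item, "scale": grid["scale"]} for item in grid["items"]]

for q in split_grid(grid):
    print(q["text"], "/", " - ".join(q["scale"]))
```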
It is well known that some survey respondents reduce the effort they invest in answering questions by taking mental shortcuts – survey satisficing. This is a concern because such shortcuts can reduce ...the quality of responses and, potentially, the accuracy of survey estimates. This article explores “speeding,” an extreme type of satisficing, which we define as answering so quickly that respondents could not have given much, if any, thought to their answers. To reduce speeding among online respondents we implemented an interactive prompting technique. When respondents answered faster than a minimal response time threshold, they received a message encouraging them to answer carefully and take their time. Across six web survey experiments, this prompting technique reduced speeding on subsequent questions compared to a no prompt control. Prompting slowed response times whether the speeding that triggered the prompt occurred early or late in the questionnaire, in the first or later waves of a longitudinal survey, among respondents recruited from non-probability or probability panels, or whether the prompt was delivered on only the first or on all speeding episodes. In addition to reducing speeding, the prompts increased response accuracy on simple arithmetic questions for a key subgroup. Prompting also reduced later straightlining in one experiment, suggesting the benefits may generalize to other types of mental shortcuts. Although the prompting could have annoyed respondents, it was not accompanied by a noticeable increase in breakoffs. As an alternative technique, respondents in one experiment were asked to explicitly commit to responding carefully. This global approach complemented the more local, interactive prompting technique on several measures. Taken together, these results suggest that interactive interventions of this sort may be useful for increasing respondents’ conscientiousness in online questionnaires, even though these questionnaires are self-administered.
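A minimal sketch of the thresholding logic behind such interactive prompts follows. The per-word threshold value, message wording, and function names are assumptions for illustration, not the experiments' exact parameters.

```python
# Sketch of interactive speeding prompts: if a respondent submits an
# answer faster than a per-item response-time threshold, return a
# message encouraging more careful responding. The 300 ms/word rule
# and the prompt text are invented for illustration.

PROMPT = ("You seem to have responded very quickly. "
          "Please be sure you have given the question enough thought.")

def speeding_threshold_ms(question_text, ms_per_word=300):
    """Threshold scales with question length (assumed rule)."""
    return ms_per_word * len(question_text.split())

def check_response(question_text, response_time_ms,
                   prompt_all_episodes=True, already_prompted=False):
    """Return a prompt message when speeding is detected, else None.

    prompt_all_episodes mirrors the design contrast in the abstract:
    prompt on every speeding episode vs. only the first one.
    """
    if response_time_ms < speeding_threshold_ms(question_text):
        if prompt_all_episodes or not already_prompted:
            return PROMPT
    return None

# 8-word question -> 2400 ms threshold; 900 ms triggers the prompt.
print(check_response("How many hours did you work last week?", 900))
```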
Purpose
This paper aims to examine the cognitive processes involved in answering survey questions. It also briefly discusses how the cognitive viewpoint has been challenged by other approaches (such as conversational analysis).
Design/methodology/approach
The paper reviews the major components of the response process and summarizes work examining how each of these components can contribute to measurement errors in surveys.
Findings
The Cognitive Aspects of Survey Methodology (CASM) model of the survey response process is still generating useful research, but both the satisficing model and the conversational approach provide useful supplements, emphasizing motivational and social sources of error neglected in the CASM approach.
Originality/value
The paper provides an introduction to the cognitive processes underlying survey responses and how these processes can explain why survey responses may be inaccurate.
For many household surveys in the United States, response rates have been steadily declining for at least the past two decades. A similar decline in survey response can be observed in all wealthy countries. Efforts to raise response rates have used such strategies as monetary incentives or repeated attempts to contact sample members and obtain completed interviews, but these strategies increase the costs of surveys. This review addresses the core issues regarding survey nonresponse. It considers why response rates are declining and what that means for the accuracy of survey results. These trends are of particular concern for the social science community, which is heavily invested in obtaining information from household surveys. The evidence to date makes it apparent that current trends in nonresponse, if not arrested, threaten to undermine the potential of household surveys to elicit information that assists in understanding social and economic issues. The trends also threaten to weaken the validity of inferences drawn from estimates based on those surveys. High nonresponse rates create the potential for bias in estimates and affect survey design, data collection, estimation, and analysis.
The survey community is painfully aware of these trends and has responded aggressively to these threats. The interview modes employed by surveys in the public and private sectors have proliferated as new technologies and methods have emerged and matured. To the traditional trio of mail, telephone, and face-to-face surveys have been added interactive voice response (IVR), audio computer-assisted self-interviewing (ACASI), web surveys, and a number of hybrid methods. Similarly, a growing research agenda has emerged in the past decade or so focused on seeking solutions to various aspects of the problem of survey nonresponse; the potential solutions that have been considered range from better training and deployment of interviewers to more use of incentives, better use of the information collected during data collection, and increased use of auxiliary information from other sources in survey design and data collection. Nonresponse in Social Science Surveys: A Research Agenda also documents the increased use of information collected in the survey process in nonresponse adjustment.
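As a concrete illustration of one standard nonresponse-adjustment technique of the kind the report discusses, here is a sketch of a weighting-class adjustment: within each adjustment class defined by auxiliary information, the base weights of respondents are inflated to cover the weights of nonrespondents. The classes and weights below are invented, and this is a generic textbook method rather than the report's own procedure.

```python
# Weighting-class nonresponse adjustment sketch (invented data).

def adjust_weights(cases):
    """cases: list of (adjustment_class, base_weight, responded) tuples.
    Returns {case_index: adjusted_weight} for responding cases."""
    totals, resp_totals = {}, {}
    for cls, w, responded in cases:
        totals[cls] = totals.get(cls, 0.0) + w          # all sampled cases
        if responded:
            resp_totals[cls] = resp_totals.get(cls, 0.0) + w  # respondents only
    adjusted = {}
    for i, (cls, w, responded) in enumerate(cases):
        if responded:
            # Inflate each respondent's weight by the inverse of the
            # weighted response rate in its class.
            adjusted[i] = w * totals[cls] / resp_totals[cls]
    return adjusted

cases = [("urban", 1.0, True), ("urban", 1.0, False),
         ("rural", 2.0, True), ("rural", 2.0, True)]
print(adjust_weights(cases))  # the urban respondent's weight doubles
```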
Abstract
Although most survey researchers agree that reliability is a critical requirement for survey data, there have not been many efforts to assess the reliability of responses in national surveys. In addition, there are quite different approaches to studying the reliability of survey responses. In the first section of the Lecture, I contrast a psychological theory of over-time consistency with three statistical models that use reinterview data, multitrait-multimethod experiments, and three-wave panel data to estimate reliability. The more sophisticated statistical models reflect concerns about memory effects and the impact of method factors in reinterview studies. In the following section of the Lecture, I examine some of the major findings from the literature on reliability. Despite the differences across methods for exploring reliability, the findings mostly converge, identifying similar respondent and question characteristics as major determinants of reliability. The next section of the paper looks at the correlations among estimates of reliability derived from the different methods; it finds some support for the validity of the measures from traditional reinterview studies. The empirical claims motivating the more sophisticated methods for estimating reliability are not strongly supported in the literature. Reliability is, in my judgment, a neglected topic among survey researchers, and I hope the Lecture spurs further studies of the reliability of survey questions.
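The three-wave panel approach mentioned above is commonly associated with Heise's (1969) quasi-simplex estimator. As a rough illustration, not the Lecture's own derivation: assuming a lag-1 simplex with constant reliability across waves, reliability can be estimated from the three wave-to-wave correlations as reliability = (r12 * r23) / r13. The correlations below are invented.

```python
# Heise-style three-wave reliability sketch: repeated measures of the
# same item at waves 1, 2, and 3 let observed correlations separate
# unreliability from true change, under the quasi-simplex assumptions.

def heise_reliability(r12, r23, r13):
    """Reliability estimate from wave-to-wave correlations (Heise 1969)."""
    return (r12 * r23) / r13

# Invented correlations for illustration.
print(round(heise_reliability(r12=0.60, r23=0.58, r13=0.45), 2))  # ~0.77
```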
We carried out two experiments to investigate how the shading of the options in a response scale affected the answers to the survey questions. The experiments were embedded in two web surveys, and they varied whether the two ends of the scale were represented by shades of the same or different hues. The experiments also varied the numerical labels for the scale points and examined responses to both unipolar scales (assessing frequency) and bipolar scales (assessing favorability). We predicted that the use of different hues would affect how respondents viewed the low end of the scale, making responses to that end seem more extreme than when the two ends were shades of the same hue. This hypothesis was based on the notion that respondents use various interpretive heuristics in assigning meaning to the visual features of survey questions. One such cue is visual similarity. When two options are similar in appearance, respondents will see them as conceptually closer than when they are dissimilar in appearance. The results were generally consistent with this prediction. When the end points of the scale were shaded in different hues, the responses tended to shift toward the high end of the scale, as compared to scales in which both ends of the scale were shaded in the same hue. Though noticeable, this shift was less extreme than the similar shift produced when negative numbers were used to label one end of the scale; moreover, the effect of color was eliminated when each scale point had a verbal label. These findings suggest that respondents have difficulty using scales and pay attention even to incidental features of the response scales in interpreting the scale points.