We carried out an experiment that compared telephone and Web versions of a questionnaire that assessed attitudes toward science and knowledge of basic scientific facts. Members of a random digit dial (RDD) sample were initially contacted by telephone and answered a few screening questions, including one that asked whether they had Internet access. Those with Internet access were randomly assigned to complete either a Web version of the questionnaire or a computer-assisted telephone interview. There were four main findings. First, although we offered cases assigned to the Web survey a larger incentive, fewer of them completed the online questionnaire; almost all those who were assigned to the telephone condition completed the interview. The two samples of Web users nonetheless had similar demographic characteristics. Second, the Web survey produced less item nonresponse than the telephone survey. The Web questionnaire prompted respondents when they left an item blank, whereas the telephone interviewers accepted “no opinion” answers without probing them. Third, Web respondents gave less differentiated answers to batteries of attitude items than their telephone counterparts. The Web questionnaire presented these items in a grid that may have made their similarity more salient. Finally, Web respondents took longer to complete the knowledge items, particularly those requiring open-ended answers, than the telephone respondents, and Web respondents answered a higher percentage of them correctly. These differences between Web and telephone surveys probably reflect both inherent differences between the two modes and incidental features of our implementation of the survey. The mode differences also vary by item type and by respondent age.
Some researchers have argued that respondents give more extreme answers to questions involving response scales over the telephone than in other modes of data collection, but others have argued that telephone respondents give more positive answers. We conducted a meta-analysis based on 18 experimental comparisons between telephone interviews and another mode of data collection. Our analysis showed that telephone respondents are significantly more likely than respondents in other modes to give extremely positive answers (for example, the highest satisfaction ratings in a customer satisfaction survey) but are not more likely to give extremely negative responses. This tendency to give highly positive ratings appears to be related to the presence of an interviewer, and it may reflect respondents' reluctance to express bad news, a tendency some social psychologists have dubbed the MUM effect.
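The pooling step behind a meta-analysis like this can be sketched with a fixed-effect (inverse-variance) combination of per-study log odds ratios. The counts below are invented for illustration, not data from the 18 comparisons in the study:

```python
import math

# Hypothetical per-study counts (illustrative only, not the study's data):
# (telephone extreme-positive, telephone n, other-mode extreme-positive, other-mode n)
studies = [
    (120, 400, 90, 400),
    (60, 250, 45, 260),
    (200, 800, 150, 790),
]

def log_odds_ratio(a, n1, b, n2):
    """Log odds ratio of an extreme-positive answer, telephone vs. other mode."""
    c, d = n1 - a, n2 - b
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # approximate variance of the log OR
    return lor, var

# Fixed-effect pooling: weight each study by the inverse of its variance
num = den = 0.0
for a, n1, b, n2 in studies:
    lor, var = log_odds_ratio(a, n1, b, n2)
    w = 1.0 / var
    num += w * lor
    den += w

pooled = num / den
se = math.sqrt(1.0 / den)
print(f"pooled log OR = {pooled:.3f}, "
      f"95% CI = ({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")
```

A pooled log odds ratio above zero would correspond to the reported pattern: telephone respondents being more likely to choose the extreme positive category.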
Important theoretical questions in survey research over the past 50 years have been: How does bringing in late or reluctant respondents affect total survey error? Does the effort and expense of obtaining interviews from difficult-to-contact or reluctant respondents significantly decrease the nonresponse error of survey estimates? Or do these late respondents introduce enough measurement error to offset any reductions in nonresponse bias? This study attempts to address these questions by examining nonresponse and data quality in two national household surveys: the Current Population Survey (CPS) and the American Time Use Survey (ATUS). Response propensity models were developed for each survey, and data quality in each survey was assessed by a variety of indirect indicators of response error, for example, item-missing-data rates, round value reports, and interview-reinterview response inconsistencies. The principal analyses investigated the relationship between response propensity and the data-quality indicators in each survey, and examined the effects of potential common causal factors when there was evidence of covariation. Although the strength of the relationship varied by indicator and survey, data quality decreased for some indicators as the probability of nonresponse increased. Therefore, the direct implication for survey managers is that efforts to reduce nonresponse can lead to poorer-quality data. Moreover, these effects remain even after attempts to control for potential common causal factors.
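The core of the principal analysis, relating estimated response propensity to a data-quality indicator, can be illustrated with a simple correlation. The respondent records below are invented for illustration; the actual study used model-based propensities from the CPS and ATUS:

```python
import math
import statistics

# Hypothetical records (illustrative only): each pair is
# (estimated response propensity, item-missing-data rate for that respondent)
respondents = [
    (0.95, 0.01), (0.90, 0.02), (0.85, 0.02), (0.80, 0.03),
    (0.70, 0.04), (0.65, 0.05), (0.55, 0.06), (0.50, 0.05),
    (0.40, 0.08), (0.30, 0.10), (0.25, 0.09), (0.15, 0.12),
]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

propensities = [p for p, _ in respondents]
missing_rates = [m for _, m in respondents]
r = pearson(propensities, missing_rates)
print(f"correlation between response propensity and item-missing rate: {r:.2f}")
```

A negative correlation in data like these would mirror the abstract's finding: as the probability of nonresponse rises (propensity falls), item-missing rates, and thus measurement problems, tend to rise.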
The Framing of the Record Linkage Consent Question
Kreuter, Frauke; Sakshaug, Joseph W.; Tourangeau, Roger
International Journal of Public Opinion Research, 03/2016, Volume 28, Issue 1
Journal Article
Peer reviewed
Many surveys around the world ask respondents for consent to link their sample survey records to corresponding records collected from administrative sources, including income and employment records, medical history and billing records, and social benefit and tax records. Declines in linkage consent rates have raised concerns that the consenting sample may no longer be representative of the survey's target population and may therefore undermine the accuracy of any inferences drawn from linked survey and administrative data.
Introduction: New Challenges to Social Measurement
Massey, Douglas S.; Tourangeau, Roger
The Annals of the American Academy of Political and Social Science, 01/2013, Volume 645, Issue 1
Journal Article
Peer reviewed
Surveys are the principal source of data not only for social science, but for consumer research, political polling, and federal statistics. In response to social and technological trends, rates of survey nonresponse have risen markedly in recent years, prompting observers to worry about the continued validity of surveys as a tool for data gathering. This introductory article sets the stage for the comprehensive review that follows of the causes and consequences of nonresponse for survey data and the approaches that have been developed to address it.
Survey researchers have long speculated that there may be a link between nonresponse and measurement error: that is, people likely to become nonrespondents to a survey are also likely to make poor reporters if they do take part. Still, there is surprisingly little evidence of such a link. It could be that nonresponse is generally the product of one set of factors and reporting errors the product of an unrelated set, or both nonresponse and reporting errors may be item-specific, so that no general relationship between the two is likely to emerge. Our study examined a situation in which we thought there would be a link between response propensities and the propensity to give inaccurate answers. We asked samples of voters and nonvoters to take part in a survey that included items about voting. Past research shows that nonvoters misreport that fact and that they are less likely than voters in general to take part in surveys. We thought we could heighten the differences between voters and nonvoters in both response rates and levels of misreporting if we characterized the survey as being about politics. However, only nonresponse biases were larger when the topic of the survey was described as political, and this difference was only marginally significant. These two ways of framing the study had even smaller effects on estimates derived from other items in the questionnaire. The overall biases in estimates derived from the voting items are very substantial, and both nonresponse and measurement error contribute to them.
Survey researchers since Cannell have worried that respondents may take various shortcuts to reduce the effort needed to complete a survey. The evidence for such shortcuts is often indirect. For instance, preferences for earlier over later response options have been interpreted as evidence that respondents do not read beyond the first few options. This remains only a hypothesis, however, unsupported by direct evidence about the allocation of respondent attention. In the current study, we used a new method to more directly observe what respondents do and do not look at by recording their eye movements while they answered questions in a Web survey. The eye-tracking data indicate that respondents do in fact spend more time looking at the first few options in a list of response options than those at the end of the list; this helps explain their tendency to select the options presented first regardless of their content. In addition, the eye-tracking data reveal that respondents are reluctant to invest effort in reading definitions of survey concepts that are only a mouse click away or in paying attention to initially hidden response options. It is clear from the eye-tracking data that some respondents are more prone to these and other cognitive shortcuts than others, providing relatively direct evidence for what had been suspected based on more conventional measures.
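The dwell-time analysis described above reduces, at its core, to summing fixation durations per response option. The fixation log below is invented for illustration; real eye-tracking output has far richer structure (coordinates, timestamps, areas of interest):

```python
from collections import defaultdict

# Hypothetical fixation log from a Web-survey eye tracker (illustrative only):
# each entry is (response-option index, fixation duration in ms)
fixations = [
    (0, 310), (0, 280), (1, 240), (0, 190), (2, 150),
    (1, 120), (3, 90), (2, 80), (4, 40),
]

# Total dwell time per response option
dwell = defaultdict(int)
for option, duration_ms in fixations:
    dwell[option] += duration_ms

for option in sorted(dwell):
    print(f"option {option}: total dwell {dwell[option]} ms")
```

In data patterned like the study's finding, total dwell time falls off sharply for options later in the list, consistent with respondents reading the first few options and skimming or skipping the rest.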