Putting a value on injuries to natural assets
Bishop, Richard C.; Boyle, Kevin J.; Carson, Richard T.; et al.
Science (American Association for the Advancement of Science), 04/2017, Volume 356, Issue 6335
Journal Article
Peer reviewed
Open access
When large-scale accidents cause catastrophic damage to natural or cultural resources, government and industry are faced with the challenge of assessing the extent of damages and the magnitude of restoration that is warranted. Although market transactions for privately owned assets provide information about how valuable they are to the people involved, the public services of natural assets are not exchanged on markets; thus, efforts to learn about people's values involve either untestable assumptions about how other things people do relate to these services or empirical estimates based on responses to stated-preference surveys. Valuation based on such surveys has been criticized because the respondents are not engaged in real transactions. Our research in the aftermath of the 2010 BP Deepwater Horizon oil spill addresses these criticisms using the first nationally representative stated-preference survey that tests whether responses are consistent with the rational economic choices expected of real transactions. Our results confirm that the survey findings are consistent with economic decisions and would support investing at least $17.2 billion to prevent such injuries to the Gulf of Mexico's natural resources in the future.
This book provides a thorough review of the authors’ own research and other empirical evidence on Web surveys, taking a total survey error perspective. That perspective distinguishes several major sources of error in survey estimates, including sampling and coverage, nonresponse, and measurement issues. Because Web surveys are often used in combination with more traditional modes of data collection, the book also provides a model of the errors arising from mixed mode surveys. In its discussion of sampling and coverage, the book assesses the effectiveness of statistical procedures designed to remove selection and coverage biases from Web survey estimates. Several chapters are devoted to the measurement properties of Web surveys, examining basic design issues in Web surveys, the impact of the Web’s character as primarily a visual medium, the ability of Web surveys to permit interaction with the respondents, and the Web as a method for self-administering sensitive survey questions. An overall theme of the book is that Web surveys often offer relatively poor representation of the general population (sampling is difficult, coverage is imperfect, and response rates are often low), but relatively accurate measurement (allowing feedback to respondents and providing the benefits of self-administration). Although the book’s aims are primarily scientific, it does offer practical guidance to researchers where the evidence seems to support clear recommendations.
Every survey begins with a request to the sample members to take part. How that request is framed can have a variety of consequences, including its intended (positive) effect on the cooperation rate. Survey appeals tend to emphasize the benefits of participation, but there is reason to think that emphasizing the negative consequences of nonparticipation may sometimes be a more effective method of inducing cooperation. We carried out an experiment in which respondents in a random digit dialing (RDD) sample were asked to complete a second telephone interview. For approximately half of the respondents, we emphasized the benefits of their completing the follow-up interview; for the others, we emphasized the loss involved if they chose not to complete the follow-up. Based on Kahneman and Tversky's prospect theory, we predicted that the loss framing would be more effective than the gain framing. In line with our prediction, 87.5 percent of those who got the "loss" framing of the request completed the second interview versus 77.9 percent of those who got the "gain" framing. Multivariate models of the response rate to the second interview (conditional on completion of the first) suggest that the framing effect is fairly robust across subgroups of the sample.
It is well established that taking part in earlier rounds of a panel survey can affect how respondents answer questions in later rounds. It is less clear, however, whether panel participation affects the quality of the data that respondents provide. We examined two panels to investigate how participation affects several indicators of data quality—including straightlining, item missing data, scale reliabilities, and differences in item functioning over time—and to test the hypotheses that it is less educated and older respondents who mainly account for any panel effects. The two panels were the GfK Knowledge Panel, in which some respondents completed up to four rounds measuring their attitudes toward terrorism and ways to counter terrorism, and the General Social Survey (GSS), in which respondents completed up to three rounds with an omnibus set of questions. The two panels differ sharply in terms of response rates and the level of prior survey experience of the respondents. Most of our comparisons are within-respondent, comparing the answers panel members gave in earlier rounds with those they gave in later rounds, but we also confirm the main results using between-subject comparisons. We find little evidence that respondents gave either better or worse data over time in either panel and little support for either the education or age hypotheses.
Web Surveys by Smartphones and Tablets
Tourangeau, Roger; Sun, Hanyu; Yan, Ting; et al.
Social science computer review, 10/2018, Volume 36, Issue 5
Journal Article
Peer reviewed
Does completing a web survey on a smartphone or tablet computer reduce the quality of the data obtained compared to completing the survey on a laptop computer? This is an important question, since a growing proportion of web surveys are done on smartphones and tablets. Several earlier studies have attempted to gauge the effects of the switch from personal computers to mobile devices on data quality. We carried out a field experiment in eight counties around the United States that compared responses obtained by smartphones, tablets, and laptop computers. We examined a range of data quality measures including completion times, rates of missing data, straightlining, and the reliability and validity of scale responses. A unique feature of our study design is that it minimized selection effects; we provided the randomly determined device on which respondents completed the survey after they agreed to take part. As a result, respondents may have been using a device (e.g., a smartphone) for the first time. However, like many of the prior studies examining mobile devices, we find few effects of the type of device on data quality.
Web Surveys by Smartphone and Tablets
Tourangeau, Roger; Maitland, Aaron; Rivero, Gonzalo; et al.
Public opinion quarterly, 12/2017, Volume 81, Issue 4
Journal Article
Peer reviewed
With respondents increasingly completing web surveys on tablet computers and smartphones, several studies have examined the potential effects of the switch from PCs to mobile devices. The studies have looked at a range of outcomes, including completion rates, breakoffs, and item nonresponse. We carried out a field experiment that compared responses obtained by smartphones, tablets, and laptop computers, focusing on the potential effects of the different devices on measurement errors. We examined whether the differences across devices in screen size (and the related need to scroll to see the entire question or the full set of response options) might moderate the effects of response order, affect the strategy respondents used to decide which of two options was preferable, change the effect of question context, or influence the use of definitions. Our experiments were based on the principle of visual prominence—the idea that respondents are more likely to notice and consider information that is easy to see. The experiments were deliberately designed to maximize the impact of screen size on the results, since the screen size would affect the visual prominence of key information. However, like many of the prior studies examining mobile devices, although response order, context, and evaluation strategy affected the answers respondents gave, few device effects emerged.
Survey methodology
Groves, Robert M.; Fowler, Floyd J., Jr.; Couper, Mick P.; et al.
2009, Volume 561
eBook
Praise for the First Edition: "The book makes a valuable contribution by synthesizing current research and identifying areas for future investigation for each aspect of the survey process." -Journal of the American Statistical Association "Overall, the high quality of the text material is matched by the quality of writing . . ." -Public Opinion Quarterly ". . . it should find an audience everywhere surveys are being conducted." -Technometrics
This new edition of Survey Methodology continues to provide a state-of-the-science presentation of essential survey methodology topics and techniques. The volume's six world-renowned authors have updated this Second Edition to present newly emerging approaches to survey research and provide more comprehensive coverage of the major considerations in designing and conducting a sample survey. Key topics in survey methodology are clearly explained in the book's chapters, with coverage including sampling frame evaluation, sample design, development of questionnaires, evaluation of questions, alternative modes of data collection, interviewing, nonresponse, post-collection processing of survey data, and practices for maintaining scientific integrity. Acknowledging the growing advances in research and technology, the Second Edition features:
- Updated explanations of sampling frame issues for mobile telephone and web surveys
- New scientific insight on the relationship between nonresponse rates and nonresponse errors
- Restructured discussion of ethical issues in survey research, emphasizing the growing research results on privacy, informed consent, and confidentiality issues
- The latest research findings on effective questionnaire development techniques
- The addition of 50% more exercises at the end of each chapter, illustrating basic principles of survey design
- An expanded FAQ chapter that addresses the concerns that accompany newly established methods
Providing valuable and informative perspectives on the most modern methods in the field, Survey Methodology, Second Edition is an ideal book for survey research courses at the upper-undergraduate and graduate levels. It is also an indispensable reference for practicing survey methodologists and any professional who employs survey research methods.
To avoid asking respondents questions that do not apply to them, surveys often use filter questions that determine routing into follow-up items. Filter questions can be asked in an interleafed format, in which follow-up questions are asked immediately after each relevant filter, or in a grouped format, in which follow-up questions are asked only after multiple filters have been administered. Most previous investigations of filter questions have found that the grouped format collects more affirmative answers than the interleafed format. This result has been taken to mean that respondents in the interleafed format learn to shorten the questionnaire by answering the filter questions negatively. However, this is only one mechanism that could produce the observed differences between the two formats. Acquiescence, the tendency to answer yes to yes/no questions, could also explain the results. We conducted a telephone survey that linked filter question responses to high-quality administrative data to test two hypotheses about the mechanism of the format effect. We find strong support for motivated underreporting and less support for the acquiescence hypothesis. This is the first clear evidence that the grouped format results in more accurate answers to filter questions. However, we also find that the underreporting phenomenon does not always occur. These findings are relevant to all surveys that use multiple filter questions.