This paper addresses speeding, that is, "too fast" responses, in web surveys. Relying on the response process model, we argue that very short response times indicate low data quality, stemming from a lack of attention on the part of respondents. To identify speeding, prior research employed case-wise procedures. Using data from nine online surveys, we demonstrate that the response behavior of individual respondents varies considerably during a survey. Thus, we use case- and page-wise procedures to capture speeding behavior, which tap different, although related, phenomena. Moreover, page-specific speeding measures capture aspects of data quality that traditional quality measures do not cover. Employing both page-specific and case-wise speeding measures, we examine whether removing speeders makes a difference in substantive findings. The evidence indicates that removing "too fast" responses does not alter marginal distributions, irrespective of which speeder-correction technique is employed. Moreover, explanatory models yield, by and large, negligible coefficient differences (on average about one standard error). Only in exceptional cases do differences exceed two standard errors. Our findings suggest that speeding, if it makes a difference at all, primarily adds random noise to the data and attenuates correlations. The paper concludes by discussing implications and limitations.
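As an illustration of the distinction this abstract draws between case-wise and page-wise procedures, here is a minimal Python sketch of how the two kinds of speeding flags might be computed from per-page response times. The simulated data, the 30%-of-median threshold, and all variable names are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

# Hypothetical response times (seconds): rows = respondents, cols = survey pages.
rng = np.random.default_rng(0)
times = rng.lognormal(mean=3.0, sigma=0.5, size=(1000, 10))

# Page-wise flag: a response counts as "too fast" if it falls below, say,
# 30% of the median time for that page (threshold chosen for illustration).
page_medians = np.median(times, axis=0)
page_speeding = times < 0.3 * page_medians

# Case-wise flag: a respondent counts as a speeder if their total completion
# time falls below the same fraction of the median total time.
totals = times.sum(axis=1)
case_speeding = totals < 0.3 * np.median(totals)
```

A respondent can be unflagged case-wise yet still have several page-wise flags, which is why the two procedures capture related but distinct behavior.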
We report the results of the first large-scale, long-term, experimental test between two crowdsourcing methods: prediction markets and prediction polls. More than 2,400 participants made forecasts on 261 events over two seasons of a geopolitical prediction tournament. Forecasters were randomly assigned to either prediction markets (continuous double auction markets) in which they were ranked based on earnings, or prediction polls in which they submitted probability judgments, independently or in teams, and were ranked based on Brier scores. In both seasons of the tournament, prices from the prediction market were more accurate than the simple mean of forecasts from prediction polls. However, team prediction polls outperformed prediction markets when forecasts were statistically aggregated using temporal decay, differential weighting based on past performance, and recalibration. The biggest advantage of prediction polls was at the beginning of long-duration questions. Results suggest that prediction polls with proper scoring feedback, collaboration features, and statistical aggregation are an attractive alternative to prediction markets for distilling the wisdom of crowds.
This paper was accepted by Uri Gneezy, behavioral economics.
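For readers unfamiliar with the scoring rule by which poll forecasters were ranked, here is a minimal sketch of the multi-category Brier score: the sum of squared differences between the forecast probability vector and the 0/1 outcome indicator vector. The function name and example probabilities are illustrative, not taken from the tournament.

```python
def brier_score(forecast_probs, outcome_index):
    """Multi-category Brier score: sum over categories of the squared gap
    between the forecast probability and the 0/1 outcome indicator.
    Lower is better; for a binary question it ranges from 0 to 2."""
    return sum((p - (i == outcome_index)) ** 2
               for i, p in enumerate(forecast_probs))

# A confident, correct forecast scores near 0; a confident, wrong one near 2.
good = brier_score([0.9, 0.1], 0)   # event in category 0 occurred
bad = brier_score([0.9, 0.1], 1)    # event in category 1 occurred
```

Because the Brier score is a proper scoring rule, forecasters maximize their expected rank by reporting their true probabilities, which is part of the feedback design the abstract refers to.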
We examine political polarization over climate change within the American public by analyzing data from 10 nationally representative Gallup Polls between 2001 and 2010. We find that liberals and Democrats are more likely to report beliefs consistent with the scientific consensus and express personal concern about global warming than are conservatives and Republicans. Further, the effects of educational attainment and self-reported understanding on global warming beliefs and concern are positive for liberals and Democrats, but are weaker or negative for conservatives and Republicans. Last, significant ideological and partisan polarization has occurred on the issue of climate change over the past decade.
High response rates have traditionally been considered one of the main indicators of survey quality. Obtaining high response rates is sometimes difficult and expensive, but doing so clearly plays a beneficial role in improving data quality. It is becoming increasingly clear, however, that simply boosting response to achieve a higher response rate will not in itself eradicate nonresponse bias. In this book the authors argue that high response rates should not be seen as a goal in themselves, but rather as part of an overall survey quality strategy based on random probability sampling and aimed at minimising nonresponse bias.
Key features of Improving Survey Response:
A detailed coverage of nonresponse issues, including a unique examination of cross-national survey nonresponse processes and outcomes.
A discussion of the potential causes of nonresponse and practical strategies to combat it.
A detailed examination of the impact of nonresponse and of techniques for adjusting for it once it has occurred.
Examples of best practices and experiments drawn from 25 European countries.
Supplemented by the European Social Survey (ESS) websites, containing materials for the measurement and analysis of nonresponse based on detailed country-level response process datasets.
The book is designed to help survey researchers and those commissioning surveys by explaining how to prioritise the reduction of nonresponse bias rather than focusing on increasing the overall response rate. It shows substantive researchers how nonresponse can impact on substantive outcomes.
The democratization of AI tools for content generation, combined with unrestricted access to mass media for all (e.g. through microblogging and social media), makes it increasingly hard for people to distinguish fact from fiction. This raises the question of how individual opinions evolve in such a networked environment without grounding in a known reality. The dominant approach to studying this problem uses simple models from the social sciences on how individuals change their opinions when exposed to their social neighborhood, and applies them on large social networks. We propose a novel model that incorporates two known social phenomena: (i) Biased Assimilation: the tendency of individuals to adopt other opinions if they are similar to their own; (ii) Backfire Effect: the fact that an opposite opinion may further entrench people in their stances, making their opinions more extreme instead of moderating them. To the best of our knowledge, this is the first DeGroot-type opinion formation model that captures the Backfire Effect. A thorough theoretical and empirical analysis of the proposed model reveals intuitive conditions for polarization and consensus to exist, as well as the properties of the resulting opinions.
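To make the two social phenomena concrete, here is a toy Python caricature of a DeGroot-style update on a graph in which agents assimilate nearby opinions and backfire against distant ones. The threshold, rate, graph, and clipping to [-1, 1] are illustrative assumptions; this is not the model analyzed in the paper.

```python
import numpy as np

def step(opinions, adj, threshold=1.0, rate=0.1):
    """One synchronous update of a toy opinion model on a graph.

    Each agent moves toward neighbors whose opinions lie within `threshold`
    of its own (biased assimilation) and away from more distant neighbors
    (backfire effect). Opinions are clipped to [-1, 1]."""
    new = opinions.copy()
    for i in range(len(opinions)):
        for j in np.flatnonzero(adj[i]):
            diff = opinions[j] - opinions[i]
            if abs(diff) <= threshold:
                new[i] += rate * diff           # assimilate a similar opinion
            else:
                new[i] -= rate * np.sign(diff)  # entrench against a distant one
        new[i] = np.clip(new[i], -1.0, 1.0)
    return new

# Two like-minded pairs joined by one cross-link: the camps drift to the extremes.
adj = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]])
ops = np.array([-0.9, -0.8, 0.8, 0.9])
for _ in range(50):
    ops = step(ops, adj)
```

In this toy run the cross-link triggers the backfire term on every step, so the two camps polarize rather than converge, matching the intuition the abstract describes.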
Demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or "found" social media content. But just how trustworthy such measurement can be—say, to replace official statistics—is unknown. Survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on, for example, the need for representative samples drawn from frames that fully cover the population. New conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment. Across these approaches, there are major differences in (a) how participants (survey respondents and social media posters) understand the activity they are engaged in; (b) the nature of the data produced by survey responses and social media posts, and the inferences that are legitimate given the data; and (c) practical and ethical considerations surrounding the use of the data. Estimates are likely to align to differing degrees depending on the research topic and the populations under consideration, the particular features of the surveys and social media sites involved, and the analytic techniques for extracting opinions and experiences from social media. Traditional population coverage may not be required for social media content to effectively predict social phenomena to the extent that social media content distills or summarizes broader conversations that are also measured by surveys.
Although the purpose of questionnaire items is to obtain a person's opinion on a certain matter, a respondent's registered opinion may not reflect his or her "true" opinion because of random and systematic errors. Response styles (RSs) are a respondent's tendency to respond to survey questions in certain ways regardless of the content, and they contribute to systematic error. They affect univariate and multivariate distributions of data collected by rating scales and are alternative explanations for many research results. Despite this, RSs are often not controlled for in research. This article provides a comprehensive summary of the types of RSs, lists their potential sources, and discusses ways to diagnose and control for them. Finally, areas for further research on RSs are proposed.
Few topics in public opinion research have attracted as much attention in recent years as partisan polarization in the American mass public. Yet, there has been considerably less investigation into whether people perceive the electorate to be polarized and the patterns of these perceptions. Building on work in social psychology, we argue that Americans perceive more polarization with respect to policy issues than actually exists, a phenomenon known as false polarization. Data from a nationally representative probability sample and a novel estimation strategy to make inferences about false polarization show that people significantly misperceive the public to be more divided along partisan lines than it is in reality. Also, people's misperceptions of opposing partisans are larger than those about their own party. We discuss the implications of these empirical patterns for American electoral politics.
Likert response surveys are widely applied in marketing, public opinion polls, epidemiological and economic disciplines. Theoretically, Likert mapping from real-world beliefs could lose significant amounts of information, as they are discrete categorical metrics. Similarly, the subjective nature of Likert-scale data capture, through questionnaires, holds the potential to inject researcher biases into the statistical analysis. Arguments and counterexamples are provided to show how this loss and bias can potentially be substantial under extreme polarization or strong beliefs held by the surveyed population, and where the survey instruments are poorly controlled. These theoretical possibilities were tested using a large survey with 14 Likert-scaled questions presented to 125,387 respondents in 442 distinct behavioral-demographic groups. Despite the potential for bias and information loss, the empirical analysis found strong support for an assumption of minimal information loss under Normal beliefs in Likert scaled surveys. Evidence from this study found that the Normal assumption is a very good fit to the majority of actual responses, the only deviation from Normal being slight platykurtosis (kurtosis ~ 2), which is likely due to censoring of beliefs after the lower and upper extremes of the Likert mapping. The discussion and conclusions argue that further revisions to survey protocols can assure that information loss and bias in Likert-scaled data are minimal.
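The censoring mechanism this abstract invokes is easy to reproduce: binning latent Normal beliefs into a bounded Likert scale pushes the kurtosis below the Normal value of 3. The sketch below uses illustrative cut points and sample sizes, not the study's instrument.

```python
import numpy as np

rng = np.random.default_rng(42)

# Latent "beliefs" assumed Normal; map them to a 5-point Likert scale by
# binning, with the outer categories absorbing the tails (censoring).
beliefs = rng.normal(loc=0.0, scale=1.0, size=100_000)
edges = np.array([-1.5, -0.5, 0.5, 1.5])   # illustrative cut points
likert = np.digitize(beliefs, edges) + 1   # categories 1..5

def kurtosis(x):
    """Pearson kurtosis (Normal distribution = 3)."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4)

k_latent = kurtosis(beliefs)   # close to 3 for Normal data
k_likert = kurtosis(likert)    # below 3 (platykurtic) after binning
```

The drop in kurtosis after binning mirrors the mildly platykurtic responses the study reports, consistent with censoring at the scale's extremes rather than non-Normal beliefs.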