Around the world each year, millions of citizens turn out to vote but leave their ballots empty or spoil them. Increasingly, campaigns have emerged that promote “invalid” votes like these. Why do citizens choose to cast blank and spoiled votes? And how do campaigns mobilizing the invalid vote influence this decision? None of the Above answers these questions using evidence from presidential and gubernatorial elections in eighteen Latin American democracies. Author Mollie J. Cohen draws on a broad range of methods and sources, incorporating data from electoral management bodies, nationally representative surveys, survey experiments, focus groups, semi-structured interviews, and news sources. Contrary to received wisdom, this book shows that most citizens cast blank or spoiled votes in presidential elections on purpose. By participating in invalid vote campaigns, citizens can voice their concerns about low-quality candidates while also expressing a preference for high-quality democracy. Campaigns promoting blank and spoiled votes come about more often, and succeed at higher rates, when incumbent politicians undermine the quality of elections. Surprisingly, invalid vote campaigns can shore up the quality of democracy in the short term. None of the Above shows that swings in blank and spoiled vote rates can serve as a warning about the trajectory of a country’s democracy.
Self-report data collections, particularly through online measures, are ubiquitous in both experimental and non-experimental psychology. Invalid data can be present in such data collections for a number of reasons. One reason is careless or insufficient effort (C/IE) responding. The past decade has seen a rise in research on techniques to detect and remove these data before normal analysis (Huang, Curran, Keeney, Poposki, & DeShon, 2012; Johnson, 2005; Meade & Craig, 2012). The rigorous use of these techniques is a valuable tool for the removal of error that can impact survey results (Huang, Liu, & Bowling, 2015). This research spans a number of sub-fields of psychology, and this paper aims to integrate their different perspectives into a review and assessment of current techniques, an introduction of new techniques, and a set of recommendations for practical use. Concerns about C/IE responding are a factor any time self-report data are collected, and all researchers who collect such data should be well-versed in methods to detect this pattern of response.
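To make one such detection technique concrete, the sketch below computes the longest-string index, one of the most common C/IE screening indices in this literature. The code is not taken from the paper; the function name and conventions are illustrative.

```python
import numpy as np

def longstring(X):
    """Longest-string index, a common C/IE screening technique.

    For each respondent (row of X, items in presentation order), returns
    the length of the longest run of identical consecutive answers. Very
    long runs are a classic signature of careless or insufficient-effort
    responding; cutoffs are usually set relative to scale length and the
    sample distribution rather than fixed in advance.
    """
    out = np.empty(X.shape[0], dtype=int)
    for i, row in enumerate(X):
        longest = current = 1
        for prev, cur in zip(row[:-1], row[1:]):
            current = current + 1 if cur == prev else 1
            longest = max(longest, current)
        out[i] = longest
    return out
```

A run of, say, 20 identical answers on a 5-point scale is far more suspicious than a run of 5, which is why cutoffs are typically calibrated to the instrument rather than chosen universally.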
Abstract
Objective
Base rates of invalidity in forensic neuropsychological contexts are well explored and believed to approximate 40%, whereas base rates of invalidity across clinical non-forensic contexts are relatively less well known.
Methods
Adult-focused neuropsychologists (n = 178) were surveyed regarding base rates of invalidity across various clinical non-forensic contexts and practice settings. Median values were calculated and compared across contexts and settings.
Results
The median estimated base rate of invalidity across clinical non-forensic evaluations was 15%. When examining specific clinical contexts and settings, base rate estimates varied from 5% to 50%. Patients with medically unexplained symptoms (50%), external incentives (25%–40%), and oppositional attitudes toward testing (37.5%) were reported to have the highest base rates of invalidity. Patients with psychiatric illness, patients evaluated for attention deficit hyperactivity disorder, and patients with a history of mild traumatic brain injury were also reported to invalidate testing at relatively high base rates (approximately 20%). Conversely, patients presenting for dementia evaluation, and patients with none of the previously mentioned histories for whom invalid testing was unanticipated, were estimated to produce invalid testing in only 5% of cases. Regarding practice setting, Veterans Affairs providers reported base rates of invalidity nearly twice those reported in any other clinical setting.
Conclusions
Non-forensic clinical patients presenting with medically unexplained symptoms, external incentives, or oppositional attitudes are reported to invalidate testing at base rates similar to those of forensic examinees. The impact of context-specific base rates on the clinical evaluation of invalidity is discussed.
•A reversible data hiding scheme that effectively reduces distortion is proposed.
•The number of invalid shifting pixels in histogram shifting is reduced.
•The proposed method has a higher embedding capacity.
In recent years, reversible data hiding (RDH), a new research hotspot in the field of information security, has attracted increasing attention from researchers. Most existing RDH schemes do not fully account for the influence of a natural image’s texture on embedding distortion. The distortion caused by embedding data in a smooth region of an image is much smaller than that caused in an unsmooth region; essentially, this is because embedding additional data in a smooth region corresponds to fewer invalid shifting pixels (ISPs) during histogram shifting. We therefore propose an RDH scheme based on image texture that reduces invalid shifting of pixels in histogram shifting. Specifically, a cover image is first divided into two sub-images using a checkerboard pattern, and each sub-image’s fluctuation values are then calculated. Additional data are embedded preferentially into the regions of the sub-images with smaller fluctuation values. The experimental results demonstrate that the proposed method achieves higher capacity and better stego-image quality than some existing RDH schemes.
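Because the abstract turns on the notion of invalid shifting pixels, it helps to see where they arise in plain histogram shifting. The sketch below is not the paper’s texture-based scheme: it is a bare-bones embedder with illustrative names, it assumes the zero bin lies to the right of the peak bin, and it ignores overflow bookkeeping.

```python
import numpy as np

def hs_embed(img, bits):
    """Minimal histogram-shifting (HS) embedding for a grayscale image.

    Pixels strictly between the peak and zero bins are the invalid
    shifting pixels (ISPs): they are shifted, adding distortion, but
    carry no payload. Texture-aware schemes embed in smooth regions
    first precisely to keep this set small.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # payload-carrying bin
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # emptiest bin right of peak
    assert hist[zero] == 0, "sketch assumes a true zero bin exists"

    out = img.astype(np.int16)                       # widened working copy
    out[(out > peak) & (out < zero)] += 1            # ISPs: shifted, no data

    flat = out.ravel()                               # view into `out`
    carriers = np.flatnonzero(flat == peak)[:len(bits)]
    flat[carriers] += np.asarray(bits[:len(carriers)], dtype=np.int16)
    return out.astype(np.uint8), peak, zero
```

Decoding reverses the steps: a value of peak+1 reads as bit 1 and peak as bit 0, after which the shifted range is moved back down by one to restore the cover image exactly.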
Surveys administered online have several benefits, but they are particularly prone to careless responding, which occurs when respondents fail to read item content or give sufficient attention, resulting in raw data that may not accurately reflect respondents' true levels of the constructs being measured. Careless responding can lead to various psychometric issues, potentially impacting any area of psychology that uses self-reported surveys and assessments. This review synthesizes the careless responding literature to provide a comprehensive understanding of careless responding and ways to prevent, identify, report, and clean careless responding from data sets. Further, we include recommendations for different levels of screening for careless responses. Finally, we highlight some of the most promising areas for future work on careless responding.
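One widely used post-hoc identification index from this literature is the Mahalanobis distance. The sketch below is an illustrative assumption, not the review’s prescription: the function name, the alpha default, and the chi-square cutoff convention are ours.

```python
import numpy as np
from scipy import stats

def mahalanobis_flags(X, alpha=0.001):
    """Flag potential careless responders by Mahalanobis distance.

    X is an (n_respondents, n_items) array of item scores. Squared
    distances from the multivariate centroid are compared against a
    chi-square cutoff with n_items degrees of freedom; alpha is a
    judgment call, and flags should be combined with other indices
    rather than used alone.
    """
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse guards against a singular covariance
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff
```

Consistent with the layered screening the review recommends, distance-based flags work best alongside other indicators (longstring runs, response time, self-reported effort) rather than as a lone removal criterion.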
The number of Mendelian randomization analyses including large numbers of genetic variants is rapidly increasing. This is due to the proliferation of genome-wide association studies, and the desire to obtain more precise estimates of causal effects. However, some genetic variants may not be valid instrumental variables, in particular because they have more than one proximal phenotypic correlate (pleiotropy).
We view Mendelian randomization with multiple instruments as a meta-analysis, and show that bias caused by pleiotropy can be regarded as analogous to small study bias. Causal estimates using each instrument can be displayed visually by a funnel plot to assess potential asymmetry. Egger regression, a tool to detect small study bias in meta-analysis, can be adapted to test for bias from pleiotropy, and the slope coefficient from Egger regression provides an estimate of the causal effect. Under the assumption that the association of each genetic variant with the exposure is independent of the pleiotropic effect of the variant (not via the exposure), Egger's test gives a valid test of the null causal hypothesis and a consistent causal effect estimate even when all the genetic variants are invalid instrumental variables.
We illustrate the use of this approach by re-analysing two published Mendelian randomization studies of the causal effect of height on lung function, and the causal effect of blood pressure on coronary artery disease risk. The conservative nature of this approach is illustrated with these examples.
An adaptation of Egger regression (which we call MR-Egger) can detect some violations of the standard instrumental variable assumptions and provide an effect estimate that is not subject to these violations. The approach provides a sensitivity analysis for the robustness of the findings from a Mendelian randomization investigation.
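For readers who want the mechanics, the sketch below implements the core MR-Egger regression from summary statistics. The function name and argument conventions are ours, not the paper’s: variant-outcome associations are regressed on variant-exposure associations with an intercept, weighted by the inverse variance of the outcome associations.

```python
import numpy as np
import statsmodels.api as sm

def mr_egger(beta_exp, beta_out, se_out):
    """MR-Egger regression from summary statistics (minimal sketch).

    Variants are conventionally oriented so that the variant-exposure
    associations are positive. The slope estimates the causal effect
    under the InSIDE assumption described in the abstract; a non-zero
    intercept signals directional pleiotropy.
    """
    sign = np.sign(beta_exp)
    bx, by = beta_exp * sign, beta_out * sign  # orient exposure effects positive
    X = sm.add_constant(bx)                    # intercept = pleiotropy test
    fit = sm.WLS(by, X, weights=1.0 / se_out**2).fit()
    return fit.params[1], fit.bse[1], fit.params[0]  # slope, its SE, intercept
```

The intercept estimates the average directional pleiotropic effect across variants; when it is zero, MR-Egger reduces to the usual inverse-variance-weighted estimate.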
In self-report surveys, it is common that some individuals do not pay enough attention and effort to give valid responses. Our aim was to investigate the extent to which careless and insufficient effort responding contributes to the biasing of data. We performed analyses of dimensionality, internal structure, and data reliability of four personality scales (extroversion, conscientiousness, stability, and dispositional optimism) in two independent samples. In order to identify careless/insufficient effort (C/IE) respondents, we used a factor mixture model (FMM) designed to detect inconsistencies of response to items with different semantic polarity. The FMM identified between 4.4% and 10% of C/IE cases, depending on the scale and the sample examined. In the complete samples, all the theoretical models obtained an unacceptable fit, forcing the rejection of the starting hypothesis and making additional wording factors necessary. In the clean samples, all the theoretical models fitted satisfactorily, and the wording factors practically disappeared. Trait estimates in the clean samples were between 4.5% and 11.8% more accurate than in the complete samples. These results show that a limited amount of C/IE data can lead to a drastic deterioration in the fit of the theoretical model, produce large amounts of spurious variance, raise serious doubts about the dimensionality and internal structure of the data, and reduce the reliability with which the trait scores of all those surveyed are estimated. Identifying and filtering C/IE responses is necessary to ensure the validity of research results.
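The paper’s FMM is a full latent-variable model and does not reduce to a few lines; the sketch below shows a much cruder screen in the same spirit, based on the same idea of semantic polarity. It is explicitly not the authors’ method, and all names and conventions are illustrative.

```python
import numpy as np

def polarity_gap(X, positive_idx, negative_idx, scale_max=5):
    """Crude polarity-inconsistency screen (NOT the paper's FMM).

    After reverse-scoring negatively worded items, an attentive
    respondent should have similar mean scores on both item sets, so a
    large absolute gap suggests inconsistent (possibly C/IE) responding.
    Items are assumed to be Likert scores from 1 to scale_max.
    """
    pos_mean = X[:, positive_idx].mean(axis=1)
    neg_mean_rev = (scale_max + 1 - X[:, negative_idx]).mean(axis=1)
    return np.abs(pos_mean - neg_mean_rev)
```

A respondent who straight-lines “agree” across both positively and negatively worded items scores high on this gap, which is exactly the response inconsistency the FMM exploits more rigorously.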
Methods have been developed for Mendelian randomization that can obtain consistent causal estimates while relaxing the instrumental variable assumptions. These include multivariable Mendelian randomization, in which a genetic variant may be associated with multiple risk factors so long as any association with the outcome is via the measured risk factors (measured pleiotropy), and the MR-Egger (Mendelian randomization-Egger) method, in which a genetic variant may be directly associated with the outcome not via the risk factor of interest, so long as the direct effects of the variants on the outcome are uncorrelated with their associations with the risk factor (unmeasured pleiotropy). In this paper, we extend the MR-Egger method to a multivariable setting to correct for both measured and unmeasured pleiotropy. We show, through theoretical arguments and a simulation study, that the multivariable MR-Egger method has advantages over its univariable counterpart in terms of the plausibility of the assumption needed for consistent causal estimation and the power to detect a causal effect when this assumption is satisfied. The methods are compared in an applied analysis investigating the causal effect of high-density lipoprotein cholesterol on coronary heart disease risk. The multivariable MR-Egger method will be useful for analysing high-dimensional data in situations where the risk factors are highly related and it is difficult to find genetic variants specifically associated with the risk factor of interest (multivariable by design), and as a sensitivity analysis when the genetic variants are known to have pleiotropic effects on measured risk factors.
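The multivariable extension changes only the regression design: instead of one column of variant-exposure associations, several are entered at once. The sketch below is a minimal illustration under our own naming and orientation conventions, not the paper’s reference implementation.

```python
import numpy as np
import statsmodels.api as sm

def mv_mr_egger(B_exp, beta_out, se_out):
    """Multivariable MR-Egger from summary statistics (minimal sketch).

    B_exp is an (n_variants, n_risk_factors) matrix of variant-exposure
    associations; beta_out and se_out are the variant-outcome
    associations and their standard errors. Variants are oriented so
    that associations with the first risk factor are positive; the
    intercept absorbs directional (unmeasured) pleiotropy while the
    slopes estimate the direct causal effect of each risk factor.
    """
    sign = np.sign(B_exp[:, 0])           # orientation: first risk factor positive
    Bx = B_exp * sign[:, None]
    by = beta_out * sign
    X = sm.add_constant(Bx)
    fit = sm.WLS(by, X, weights=1.0 / se_out**2).fit()
    return fit.params[1:], fit.bse[1:], fit.params[0]  # slopes, SEs, intercept
```

Conditioning on the other risk factors is what handles measured pleiotropy; the retained intercept is what handles unmeasured pleiotropy, mirroring the measured/unmeasured distinction drawn in the abstract.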