In the January 2022 issue of Perspectives, Götz et al. argued that small effects are “the indispensable foundation for a cumulative psychological science.” They supported their argument by claiming that (a) psychology, like genetics, consists of complex phenomena explained by additive small effects; (b) psychological-research culture rewards large effects, which means small effects are being ignored; and (c) small effects become meaningful at scale and over time. We rebut these claims with three objections: First, the analogy between genetics and psychology is misleading; second, p values are the main currency for publication in psychology, meaning that any biases in the literature are (currently) caused by pressure to publish statistically significant results and not large effects; and third, claims regarding small effects as important and consequential must be supported by empirical evidence or, at least, a falsifiable line of reasoning. If accepted uncritically, we believe the arguments of Götz et al. could be used as a blanket justification for the importance of any and all “small” effects, thereby undermining best practices in effect-size interpretation. We end with guidance on evaluating effect sizes in relative, not absolute, terms.
Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
In response to the COVID-19 pandemic, the Psychological Science Accelerator coordinated three large-scale psychological studies to examine the effects of loss-gain framing, cognitive reappraisals, and autonomy framing manipulations on behavioral intentions and affective measures. The data collected (April to October 2020) included specific measures for each experimental study, a general questionnaire examining health prevention behaviors and COVID-19 experience, geographical and cultural context characterization, and demographic information for each participant. Each participant started the study with the same general questions and then was randomized to complete either one longer experiment or two shorter experiments. Data were provided by 73,223 participants with varying completion rates. Participants completed the survey from 111 geopolitical regions in 44 unique languages/dialects. The anonymized dataset described here is provided in both raw and processed formats to facilitate reuse and further analyses. The dataset offers secondary analytic opportunities to explore coping, framing, and self-determination across a diverse, global sample obtained at the onset of the COVID-19 pandemic, which can be merged with other time-sampled or geographic data.
Progress in psychology has been frustrated by challenges concerning replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility. Although often discussed separately, these five challenges may share a common cause: insufficient investment of intellectual and nonintellectual resources into the typical psychology study. We suggest that the emerging emphasis on big-team science can help address these challenges by allowing researchers to pool their resources together to increase the amount available for a single study. However, the current incentives, infrastructure, and institutions in academic science have all developed under the assumption that science is conducted by solo principal investigators and their dependent trainees, an assumption that creates barriers to sustainable big-team science. We also anticipate that big-team science carries unique risks, such as the potential for big-team-science organizations to be co-opted by unaccountable leaders, become overly conservative, and make mistakes at a grand scale. Big-team-science organizations must also acquire personnel who are properly compensated and have clear roles. Not doing so raises risks related to mismanagement and a lack of financial sustainability. If researchers can manage its unique barriers and risks, big-team science has the potential to spur great progress in psychology and beyond.
How should romantic-relationship quality be approached psychometrically? This is a complicated theoretical and methodological challenge that we begin to address through three studies. In Study 1a, we identified 25 distinct romantic-relationship categories among 754 items from 26 romantic-relationship-quality instruments with a weak Jaccard index (0.38), indicating that the scales' item content was extremely heterogeneous. Study 1b then demonstrated limited structure validity evidence in 43 scale-development-validation articles of 23 of these 26 instruments. Finally, Study 2 surveyed 587 French-speaking participants in a romantic relationship on romantic-relationship quality. Applying a network-based model, we identified four dimensions, three of which are central to relationship quality. The inferences were mostly limited to French-speaking, monogamous, heterosexual women. To resolve challenges detected in the literature, we recommend a multicountry qualitative approach, more diverse sampling, better definitions of romantic-relationship quality, and a dynamic-systems approach to measuring romantic-relationship quality.
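The Jaccard index reported above is a standard set-overlap measure: the size of the intersection divided by the size of the union, ranging from 0 (no shared content) to 1 (identical content). A minimal sketch of the computation, using entirely hypothetical item-content categories (the actual categories and coding scheme are those of the study, not shown here):

```python
def jaccard_index(a: set, b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B|. 1.0 = identical sets, 0.0 = disjoint."""
    if not a and not b:
        return 1.0  # convention: two empty sets are treated as identical
    return len(a & b) / len(a | b)

# Hypothetical content categories covered by two relationship-quality scales;
# these labels are illustrative, not the study's actual coding.
scale_x = {"trust", "intimacy", "conflict", "commitment"}
scale_y = {"trust", "intimacy", "passion", "support", "satisfaction"}

print(round(jaccard_index(scale_x, scale_y), 2))  # 2 shared / 7 total -> 0.29
```

On this scale, the reported value of 0.38 means that, on average, instruments shared well under half of their combined item content, which is the basis for describing the scales as heterogeneous.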
We expect that consensus meetings, where researchers come together to discuss their theoretical viewpoints, prioritize the factors they agree are important to study, standardize their measures, and ...determine a smallest effect size of interest, will prove to be a more efficient solution to the lack of coordination and integration of claims in science than integrative experiments.