In our study, we use the post-hypnotic suggestion of easy remembering to improve memory with long-lasting effects. We tested 24 highly suggestible participants in an online study. Participants learned word lists and recalled them later in a recognition memory task. At the beginning of the study, participants were hypnotized, and the post-hypnotic suggestion to remember easily was associated with a cue that participants used during the recognition memory task. In a control condition, the same participants used a neutral cue. One week later, participants repeated both conditions with new word lists.
Participants were significantly faster and more confident in their recognition ratings in the easy-remembering condition compared to the control condition, and this effect persisted over one week. Crucially, the increased speed and confidence in the easy-remembering condition did not affect memory accuracy. This makes our hypnosis intervention promising for patients experiencing subjective memory impairments.
2343 (Learning and Memory), 2380 (Consciousness States), 3351 (Clinical Hypnosis)
•We tied the feeling of remembering easily to a post-hypnotic anchor
•Participants remembered faster and more confidently in the easy-remembering condition
•The post-hypnotic anchor still worked one week after the hypnosis session
Abstract
Due to the information overload in today’s digital age, people may sometimes feel pressured to process and judge information especially fast. In three experiments, we examined whether time pressure increases the repetition-based truth effect, that is, the tendency to judge repeatedly encountered statements as more likely “true” than novel statements. Based on the Heuristic-Systematic Model, a dual-process model in the field of persuasion research, we expected that time pressure would boost the truth effect by increasing reliance on processing fluency as a presumably heuristic cue for truth, and by decreasing knowledge retrieval as a presumably slow and systematic process that determines truth judgments. However, contrary to our expectation, time pressure did not moderate the truth effect. Importantly, this was the case for difficult statements, for which most people lack prior knowledge, as well as for easy statements, for which most people hold relevant knowledge. Overall, the findings clearly speak against the conception of fast, fluency-based truth judgments versus slow, knowledge-based truth judgments. In contrast, the results are compatible with a referential theory of the truth effect that does not distinguish between different types of truth judgments. Instead, it assumes that truth judgments rely on the coherence of localized networks in people’s semantic memory, formed by both repetition and prior knowledge.
For several years, the public debate in psychological science has been dominated by what is referred to as the reproducibility crisis. This crisis has, inter alia, drawn attention to the need for proper control of statistical decision errors in testing psychological hypotheses. However, conventional methods of error probability control often require fairly large samples. Sequential statistical tests provide an attractive alternative: They can be applied repeatedly during the sampling process and terminate whenever there is sufficient evidence in the data for one of the hypotheses of interest. Thus, sequential tests may substantially reduce the required sample size without compromising predefined error probabilities. Herein, we discuss the most efficient sequential design, the sequential probability ratio test (SPRT), and show how it is easily implemented for a 2-sample t test using standard statistical software. We demonstrate, by means of simulations, that the SPRT not only reliably controls error probabilities but also typically requires substantially smaller samples than standard t tests and other common sequential designs. Moreover, we investigate the robustness of the SPRT against violations of its assumptions. Finally, we illustrate the sequential t test by applying it to an empirical example and provide recommendations on how psychologists can employ it in their own research to benefit from its desirable properties.
Translational Abstract
Fostered by a series of unsuccessful attempts to replicate seemingly well-established empirical results, the reproducibility crisis has dominated the public debate in psychological science for several years. Apart from increasing awareness of the consequences of questionable research practices, the crisis has drawn attention to the shortcomings of currently dominant statistical procedures. Critically, conventional methods that allow for control of both Type I and Type II statistical error probabilities (α and β, respectively) often require sample sizes much larger than typically employed. Therefore, we promote an alternative that requires substantially smaller sample sizes on average while still controlling error probabilities: sequential analysis. Unlike conventional tests, sequential tests are designed to be applied repeatedly during the sampling process and terminate as soon as there is sufficient evidence for one of the hypotheses of interest. Herein, we discuss the most efficient sequential design, the sequential probability ratio test (SPRT), and show how it is easily implemented for the common t test to compare means of 2 independent groups. We demonstrate by means of simulations that the SPRT reliably controls error probabilities and requires smaller samples than standard t tests or other common sequential designs. Moreover, we investigate the robustness of the SPRT against violations of its assumptions. Finally, we illustrate the sequential t test by applying it to an empirical example and provide concrete recommendations on how psychologists can employ it in their own research to benefit from its desirable properties.
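The boundary-crossing logic described above can be sketched in a few lines. The following is an illustrative sketch under stated assumptions, not the authors' implementation: the function and parameter names are ours, and d1 denotes the standardized effect size assumed under H1. At each monitoring step, the likelihood ratio of the observed t statistic under H1 (noncentral t) versus H0 (central t) is compared against Wald's two boundaries.

```python
# Minimal sketch of an SPRT-style two-sample t test (Wald-type boundaries).
# Illustrative only: names and default design values are our assumptions.
import numpy as np
from scipy import stats

def sprt_t_test(x, y, d1=0.5, alpha=0.05, beta=0.05):
    """One monitoring step on the data collected so far. Returns
    'accept H0', 'accept H1', or 'continue sampling'."""
    n1, n2 = len(x), len(y)
    df = n1 + n2 - 2
    t_obs, _ = stats.ttest_ind(x, y)
    # Noncentrality implied by effect size d1 at the current sample sizes
    nc = d1 * np.sqrt(n1 * n2 / (n1 + n2))
    # Likelihood ratio of the observed t statistic under H1 vs. H0
    lr = stats.nct.pdf(t_obs, df, nc) / stats.t.pdf(t_obs, df)
    if lr >= (1 - beta) / alpha:    # Wald's upper boundary: decide for H1
        return "accept H1"
    if lr <= beta / (1 - alpha):    # Wald's lower boundary: decide for H0
        return "accept H0"
    return "continue sampling"
```

In a sequential application, this check would be repeated after each additional observation until one of the two boundaries is crossed.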
In this comment, we report a simulation study that assesses error rates and average sample sizes required to reach a statistical decision for two sequential procedures, the sequential probability ratio test (SPRT) originally proposed by Wald (1947) and the independent segments procedure (ISP) recently suggested by Miller and Ulrich (2020). Following Miller and Ulrich (2020), we use sequential one-tailed t tests as examples. In line with the optimal efficiency properties of the SPRT already proven by Wald and Wolfowitz (1948), the SPRT outperformed the ISP in terms of efficiency without compromising error probability control. The efficiency gain in terms of sample size reduction achieved with the SPRT t test relative to the ISP may be as high as 25%. We thus recommend the SPRT as a default sequential testing procedure especially for detecting small or medium hypothesized effect sizes under H1 whenever a priori knowledge of the maximum sample size is not crucial. If a priori control of the maximum sample size is mandatory, however, the ISP is a very useful addition to the sequential testing literature.
Translational Abstract
Sequential tests analyze data sequentially when deciding between two statistical hypotheses H0 and H1. After each step, a decision is made whether to accept H0, to accept H1, or to continue sampling data, based on criteria that control the error rates α (probability of accepting H1 when H0 holds) and β (probability of accepting H0 when H1 holds). Using hypotheses on the means of two samples as an example, we compare the sequential probability ratio test (SPRT) originally proposed by Wald (1947) and the independent segments procedure (ISP) recently proposed by Miller and Ulrich (2020). While the former method processes data cumulatively and one by one, with the maximum sample size unknown beforehand, the latter analyzes them independently in groups, with both group size and maximum sample size known in advance. Our simulation studies show that both methods work well, as they (a) keep their predefined α and β levels and (b) need smaller samples on average than the classical fixed-sample Neyman-Pearson test. However, in terms of efficiency (i.e., the average sample size required to reach a decision), the SPRT clearly outperforms the ISP (with a sample size reduction of up to 25% relative to the ISP). We conclude that the SPRT is the method of choice to minimize the costs and time required for statistical decisions whenever a priori control of the maximum sample size is not necessary. If a priori control of the maximum sample size is mandatory, however, the ISP is a very useful alternative to the SPRT.
Randomized response models (RRMs) aim at increasing the validity of measuring sensitive attributes by eliciting more honest responses through anonymity protection of respondents. This anonymity protection is achieved by implementing randomization in the questioning procedure. However, this randomization increases the sampling variance and, therefore, the sample size requirements. The present work aims at countering this drawback by combining RRMs with curtailed sampling, a sequential sampling design in which sampling is terminated as soon as sufficient information to decide on a hypothesis has been collected. In contrast to nontruncated sequential designs, the curtailed sampling plan includes the definition of a maximum sample size, and subsequent prevalence estimation is easy to conduct. Using this approach, resources can be saved such that the application of RRMs becomes more feasible. An R Shiny web application is provided for simplified application of the proposed procedures.
Translational Abstract
Survey data are often subject to response biases, especially when sensitive (e.g., socially undesirable) characteristics are studied. However, protecting the respondents' anonymity can facilitate honest responding. Randomized response models (RRMs) achieve this goal by encrypting responses via random noise. Unfortunately, this noise increases uncertainty in the data and, therefore, large samples are required for sufficiently informative inference. To remedy this disadvantage, we propose to combine RRMs with a simple sequential testing procedure, that is, curtailed sampling. Following this approach, sample size requirements are reduced while still controlling statistical error probabilities. This way, resources can be saved such that the application of RRMs becomes more feasible. In this article, we describe how a curtailed sampling plan for RRM applications can be devised and how the respective data can be analyzed. We illustrate the procedure by means of simulations and reanalysis of empirical data. Additionally, we provide an easy-to-use R Shiny web application for simple implementation of the described procedures.
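The curtailment logic described above is simple enough to sketch directly. The snippet below is an illustrative binomial curtailment rule, not the paper's implementation; the names n_max (maximum sample size of the underlying fixed-sample test) and c (critical number of "yes" responses needed to accept H1) are our assumptions. Sampling stops as soon as the fixed-sample decision can no longer change.

```python
# Illustrative curtailed sampling plan for binary (e.g., randomized-response)
# data: stop once the outcome of the fixed-sample test is already determined.
def curtailed_decision(responses, n_max, c):
    """Process 0/1 responses one by one. Returns the decision and the
    number of observations consumed when sampling stopped."""
    yes = 0
    for i, r in enumerate(responses, start=1):
        yes += r
        if yes >= c:                      # H1 can already be accepted
            return "accept H1", i
        if yes + (n_max - i) < c:         # c can no longer be reached
            return "accept H0", i
    return "no decision", len(responses)  # fewer than n_max responses supplied
```

Because the decision is identical to that of the fixed-sample test with n_max observations, error probabilities are unchanged while the realized sample size can only shrink.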
Bayesian t tests have become increasingly popular alternatives to null-hypothesis significance testing (NHST) in psychological research. In contrast to NHST, they allow for the quantification of evidence in favor of the null hypothesis and for optional stopping. A major drawback of Bayesian t tests, however, is that error probabilities of statistical decisions remain uncontrolled. Previous approaches in the literature to remedy this problem require time-consuming simulations to calibrate decision thresholds. In this article, we propose a sequential probability ratio test that combines Bayesian t tests with simple decision criteria developed by Abraham Wald in 1947. We discuss this sequential procedure, which we call Waldian t test, in the context of three recently proposed specifications of Bayesian t tests. Waldian t tests preserve the key idea of Bayesian t tests by assuming a distribution for the effect size under the alternative hypothesis. At the same time, they control expected frequentist error probabilities, with the nominal Type I and Type II error probabilities serving as upper bounds to the actual expected error rates under the specified statistical models. Thus, Waldian t tests are fully justified from both a Bayesian and a frequentist point of view. We highlight the relationship between Bayesian and frequentist error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements the proposed procedure for interested researchers.
Translational Abstract
Bayesian t tests have become increasingly popular in psychological research. In contrast to classical test procedures, Bayesian tests can measure statistical evidence in favor of the null hypothesis and allow for optional stopping. Yet, probabilities of statistical decision errors (i.e., falsely rejecting a hypothesis when it is true) are not explicitly controlled. In this article, we propose a sequential test procedure where Bayesian t tests are calculated repeatedly after each additional observation. The sample size is increased until the test exceeds a predefined threshold. We call the proposed procedure Waldian t test because it is a straightforward combination of Bayesian t tests with Abraham Wald's sequential probability ratio test. We illustrate the procedure in the context of three different types of default and informed Bayesian t tests, and show how it satisfies both frequentist (i.e., controlling error probabilities) and Bayesian (i.e., measuring statistical evidence) desiderata. We also highlight the relationship between frequentist and Bayesian error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements Waldian t tests for interested researchers.
The repetition-induced truth effect refers to a phenomenon where people rate repeated statements as more likely true than novel statements. In this paper, we document qualitative individual differences in the effect. While the overwhelming majority of participants display the usual positive truth effect, a minority show the opposite: they reliably discount the validity of repeated statements, a pattern we refer to as a negative truth effect. We examine eight truth-effect data sets where individual-level data are curated. These sets comprise 1,105 individuals performing 38,904 judgments. Through Bayes factor model comparison, we show that reliable negative truth effects occur in five of the eight data sets. The negative truth effect is informative because it seems unreasonable that the mechanisms mediating the positive truth effect are the same ones that lead to a discounting of repeated statements’ validity. Moreover, the presence of qualitative differences motivates a different type of analysis of individual differences based on ordinal measures (i.e., which sign does the effect have?) rather than metric ones. To our knowledge, this paper reports the first such reliable qualitative differences in a cognitive task.
Stimulated by William H. Batchelder’s seminal contributions in the 1980s and 1990s, multinomial processing tree (MPT) modeling has become a powerful and frequently used method in various research fields, most prominently in cognitive psychology and social cognition research. MPT models allow for estimation of, and statistical tests on, parameters that represent psychological processes underlying responses to cognitive tasks. Therefore, their use has also been proposed repeatedly for purposes of psychological assessment, for example, in clinical settings to identify specific cognitive deficits in individuals. However, a considerable drawback of individual MPT analyses emerges from the limited number of data points per individual, resulting in estimation bias, large standard errors, and low power of statistical tests. Classical test procedures such as Neyman–Pearson tests often require very large sample sizes to ensure sufficiently low Type I and Type II error probabilities. Herein, we propose sequential probability ratio tests (SPRTs) as an efficient alternative. Unlike Neyman–Pearson tests, sequential tests continuously monitor the data and terminate when a predefined criterion is met. As a consequence, SPRTs typically require only about half of the Neyman–Pearson sample size without compromising error probability control. We illustrate the SPRT approach to statistical inference for simple hypotheses in single-parameter MPT models. Moreover, a large-sample approximation, based on ML theory, is presented for typical MPT models with more than one unknown parameter. We evaluate the properties of the proposed test procedures by means of simulations. Finally, we discuss benefits and limitations of sequential MPT analysis.
•Multinomial processing tree (MPT) models measure latent psychological processes.
•Conventional hypothesis tests on MPT parameters often require large sample sizes.
•We propose sequential probability ratio tests (SPRTs) as an efficient alternative.
•We illustrate sequential procedures for both simple and composite MPT hypotheses.
•On average, SPRTs require up to 50% fewer observations than conventional procedures.
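For a single-parameter model predicting a binary response category, Wald's SPRT reduces to accumulating a log-likelihood ratio observation by observation. The sketch below illustrates this generic procedure for simple hypotheses H0: p = p0 versus H1: p = p1; it is an illustrative reduction, not the authors' code.

```python
# Generic Wald SPRT for a Bernoulli success probability, illustrating
# sequential testing of simple hypotheses on a single parameter.
import math

def bernoulli_sprt(data, p0, p1, alpha=0.05, beta=0.05):
    """Accumulate the log-likelihood ratio over 0/1 observations and stop
    as soon as one of Wald's boundaries is crossed."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for i, x in enumerate(data, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", i
        if llr <= lower:
            return "accept H0", i
    return "continue sampling", len(data)
```

Because each observation moves the cumulative log-likelihood ratio by a bounded step, decisive sequences terminate early, which is the source of the sample size savings highlighted above.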
Bayesian model comparison (BMC) offers a principled approach to assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models which can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then showcase our method by comparing four hierarchical evidence accumulation models that have previously been deemed intractable for BMC due to partly implicit likelihoods. Additionally, we demonstrate how transfer learning can be leveraged to enhance training efficiency. We provide reproducible code for all analyses and an open-source implementation of our method.
Kinnell and Dennis (2012) showed that the list length effect in recognition memory is only observed for homogeneous stimulus material. On the basis of the global matching model MINERVA 2 (Hintzman, 1986, 1988), we offer a theoretical explanation for this finding. According to our analysis, homogeneous material immunizes against the disruptive influence of preexperimental items, which might mask the intralist interference predicted by global matching models for familiar heterogeneous material. We tested our approach in three experiments. In Experiment 1, we found list length effects for homogeneous photographs of flowers and landscapes. In Experiments 2 and 3, we presented heterogeneous photographs of scenes (Experiment 2) and faces (Experiment 3). List length effects were found only when these photographs were homogenized by the use of image-processing filters. We further show that our explanation is also in line with the results of Dennis and Chapman (2010), who found an inverse list length effect. Overall, our results provide evidence for a global matching account of familiarity.
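The global matching computation underlying this account can be sketched compactly: in MINERVA 2, each stored trace is activated in proportion to the cube of its similarity to the probe, and echo intensity (the familiarity signal) is the sum of these activations. The +1/0/−1 feature coding follows Hintzman (1986); the function name is ours, and this is a minimal illustration rather than the simulation code used in the paper.

```python
# Minimal MINERVA 2 echo-intensity sketch (after Hintzman, 1986).
# Probe and traces are numpy vectors with features coded +1, -1, or 0.
import numpy as np

def echo_intensity(probe, traces):
    """Sum of cubed probe-trace similarities across all stored traces."""
    intensity = 0.0
    for trace in traces:
        relevant = (probe != 0) | (trace != 0)   # features present in either
        n_r = int(relevant.sum())
        similarity = float(np.dot(probe, trace)) / n_r if n_r else 0.0
        intensity += similarity ** 3             # cubing keeps sign, compresses
    return intensity
```

Because every stored trace contributes to the echo, adding traces that partially match the probe raises the familiarity signal, which is how global matching models produce intralist (and, with preexperimental traces, extralist) interference.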