A Critique of the Cross-Lagged Panel Model
Hamaker, Ellen L.; Kuiper, Rebecca M.; Grasman, Raoul P. P. P.
Psychological Methods, 03/2015, Volume 20, Issue 1
Journal Article, Peer Reviewed, Open Access
The cross-lagged panel model (CLPM) is believed by many to overcome the problems associated with the use of cross-lagged correlations as a way to study causal influences in longitudinal panel data. The current article, however, shows that if stability of constructs is to some extent of a trait-like, time-invariant nature, the autoregressive relationships of the CLPM fail to adequately account for this. As a result, the lagged parameters that are obtained with the CLPM do not represent the actual within-person relationships over time, and this may lead to erroneous conclusions regarding the presence, predominance, and sign of causal influences. In this article we present an alternative model that separates the within-person process from stable between-person differences through the inclusion of random intercepts, and we discuss how this model is related to existing structural equation models that include cross-lagged relationships. We derive the analytical relationship between the cross-lagged parameters from the CLPM and the alternative model, and use simulations to demonstrate the spurious results that may arise when using the CLPM to analyze data that include stable, trait-like individual differences. We also present a modeling strategy to avoid this pitfall and illustrate this using an empirical data set. The implications for both existing and future cross-lagged panel research are discussed.
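The core problem described in this abstract can be illustrated with a minimal simulation sketch (all parameter values here are hypothetical, not taken from the paper): when data contain stable trait-like differences (random intercepts) and these are ignored, a naive pooled lagged regression overestimates the within-person autoregressive effect.

```python
import random

def simulate_panel(n_persons=2000, n_waves=20, phi=0.3,
                   trait_sd=1.0, noise_sd=1.0, seed=1):
    """Generate y_it = mu_i + w_it, where mu_i is a stable trait
    (random intercept) and w_it is an AR(1) within-person process.
    Parameter values are illustrative assumptions only."""
    rng = random.Random(seed)
    panel = []
    for _ in range(n_persons):
        mu = rng.gauss(0, trait_sd)       # time-invariant trait
        w = rng.gauss(0, noise_sd)        # within-person deviation
        series = []
        for _ in range(n_waves):
            w = phi * w + rng.gauss(0, noise_sd)
            series.append(mu + w)
        panel.append(series)
    return panel

def pooled_lag1_slope(panel):
    """OLS slope of y_t on y_{t-1}, pooled over persons and waves:
    a naive stability estimate that ignores the random intercepts."""
    xs, ys = [], []
    for series in panel:
        xs.extend(series[:-1])
        ys.extend(series[1:])
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

est = pooled_lag1_slope(simulate_panel())
print(est)  # clearly above the true within-person phi = 0.3
```

Because the stable trait contributes to both y_t and y_{t-1}, the pooled estimate mixes between-person stability into the lagged coefficient, which is the confound the random-intercept model is designed to remove.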
This study deals with addictive acts that exhibit a stable pattern without interfering with the normal routine of daily life. Nevertheless, in the long term such behaviour may result in health damage. Alcohol consumption is an example of such an addictive habit. The aim is to describe the process of addiction as a dynamical system, in the way this is done in the natural and technological sciences. The dynamics of the addictive behaviour are described by a mathematical model consisting of two coupled difference equations. They determine the change over time of two state variables, craving and self-control. The model equations contain terms that represent external forces such as societal rules, peer influences and cues. The latter are formulated as events that are Poisson distributed in time. With the model it is shown how a person can become addicted when changing lifestyle. Although craving is the dominant variable in the process of addiction, the moment of becoming dependent is clearly marked by a switch in a variable that fits the definition of addiction vulnerability in the literature. Furthermore, the way chance affects a therapeutic addiction intervention is analysed by carrying out a Monte Carlo simulation. Essential in the dynamical model is a nonlinear component which determines the configuration of the two stable states of the system: being dependent or not dependent. Under identical external conditions both may be stable (hysteresis). With the dynamical systems approach, possible switches between the two states (repeated relapses) are explored.
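The kind of bistability the abstract describes can be sketched with two illustrative coupled difference equations. These are not the authors' actual equations and every coefficient below is an assumption: a cubic term gives craving two stable states, self-control damps craving, craving erodes self-control, and cue events arrive at random.

```python
import random

def step(c, s, cue=0.0):
    """One step of an illustrative bistable model (hypothetical form):
    c = craving, s = self-control, cue = external trigger strength."""
    dc = 0.5 * c * (1 - c) * (c - 0.4) - 0.2 * s * c + cue
    ds = 0.1 * (0.5 - s) - 0.2 * c * s
    return c + dc, s + ds

def simulate(c0, s0, steps=500, cue_rate=0.0, cue_size=0.3, seed=0):
    """Iterate the map; cues occur as random events, roughly mimicking
    the Poisson-distributed cues mentioned in the abstract."""
    rng = random.Random(seed)
    c, s = c0, s0
    for _ in range(steps):
        cue = cue_size if rng.random() < cue_rate else 0.0
        c, s = step(c, s, cue)
    return c, s

# Two initial conditions, identical external conditions (no cues):
print(simulate(0.05, 0.5))  # settles in the low-craving state
print(simulate(0.9, 0.5))   # settles in the high-craving state
```

The coexistence of both end states under identical conditions is the hysteresis property; adding cue events (cue_rate > 0) lets chance switch the system between states, which is what the Monte Carlo analysis in the paper exploits.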
A Comprehensive Meta-Analysis of Money Priming
Lodder, Paul; Ong, How Hwee; Grasman, Raoul P. P. P., et al.
Journal of Experimental Psychology: General, 04/2019, Volume 148, Issue 4
Journal Article, Peer Reviewed, Open Access
Research on money priming typically investigates whether exposure to money-related stimuli can affect people's thoughts, feelings, motivations, and behaviors (for a review, see Vohs, 2015). Our study answers the call for a comprehensive meta-analysis examining the available evidence on money priming (Vadillo, Hardwicke, & Shanks, 2016). By conducting a systematic search of published and unpublished literature on money priming, we sought to achieve three key goals. First, we aimed to assess the presence of biases in the available published literature (e.g., publication bias). Second, in the case of such biases, we sought to derive a more accurate estimate of the effect size after correcting for these biases. Third, we aimed to investigate whether design factors such as prime type and study setting moderated the money priming effects. Our overall meta-analysis included 246 suitable experiments and showed a significant overall effect size estimate (Hedges' g = .31, 95% CI [0.26, 0.36]). However, publication bias and related biases are likely, given the asymmetric funnel plots, Egger's test, and two other tests for publication bias. Moderator analyses offered insight into the variation of the money priming effect, suggesting for various types of study designs whether the effect was present, absent, or biased. We found the largest money priming effect in lab studies investigating a behavioral dependent measure using a priming technique in which participants actively handled money. Future research should use sufficiently powerful preregistered studies to replicate these findings.
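Egger's test, mentioned in the abstract, boils down to a simple regression on the funnel plot. The sketch below is illustrative only (fabricated study parameters, a crude significance filter standing in for publication bias): it regresses the standardized effect (g/se) on precision (1/se), where an intercept far from zero signals funnel asymmetry.

```python
import random

def egger_intercept(effects, ses):
    """Egger's regression: standardized effect (g/se) on precision (1/se).
    Returns the intercept; values far from zero suggest asymmetry."""
    ys = [g / se for g, se in zip(effects, ses)]
    xs = [1 / se for se in ses]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

def simulate_studies(n, true_g=0.1, sig_only=False, seed=3):
    """Draw hypothetical studies with true effect true_g; if sig_only,
    keep only 'significant' results (g > 1.96*se), mimicking
    publication bias."""
    rng = random.Random(seed)
    effects, ses = [], []
    while len(effects) < n:
        se = rng.uniform(0.05, 0.5)
        g = rng.gauss(true_g, se)
        if sig_only and g <= 1.96 * se:
            continue
        effects.append(g)
        ses.append(se)
    return effects, ses

print(egger_intercept(*simulate_studies(200)))                 # near zero
print(egger_intercept(*simulate_studies(200, sig_only=True)))  # clearly positive
```

Under selective publication, small studies survive only with large effects, so the funnel plot tilts and the intercept inflates; without selection it stays near zero.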
Whether level 1 predictors should be centered per cluster has received considerable attention in the multilevel literature. While most agree that there is no one preferred approach, it has also been argued that cluster mean centering is desirable when the within-cluster slope and the between-cluster slope are expected to deviate, and the main interest is in the within-cluster slope. However, we show in a series of simulations that if one has a multilevel autoregressive model in which the level 1 predictor is the lagged outcome variable (i.e., the outcome variable at the previous occasion), cluster mean centering will in general lead to a downward bias in the parameter estimate of the within-cluster slope (i.e., the autoregressive relationship). This is particularly relevant if the main question is whether there is on average an autoregressive effect. Nonetheless, we show that if the main interest is in estimating the effect of a level 2 predictor on the autoregressive parameter (i.e., a cross-level interaction), cluster mean centering should be preferred over other forms of centering. Hence, researchers should be clear on what is considered the main goal of their study, and base their choice of centering method on this when using a multilevel autoregressive model.
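The downward bias described here can be reproduced with a small sketch (hypothetical parameters; a pooled OLS slope stands in for the full multilevel estimator): centering each person's AR(1) series at its observed mean makes the lagged predictor correlate negatively with later residuals, pulling the autoregressive estimate below its true value.

```python
import random

def centered_ar_estimate(n_persons=500, n_obs=10, phi=0.4, seed=2):
    """Cluster-mean-center an AR(1) outcome per person, then regress
    y_t on y_{t-1} pooled across persons. The estimate is biased
    downward because the observed person mean contains every occasion,
    including those that follow the lagged predictor (Nickell bias)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_persons):
        mu = rng.gauss(0, 1)                      # person-specific mean
        y = [mu + rng.gauss(0, 1)]
        for _ in range(n_obs - 1):
            y.append(mu + phi * (y[-1] - mu) + rng.gauss(0, 1))
        m = sum(y) / len(y)                       # observed cluster mean
        yc = [v - m for v in y]                   # cluster mean centering
        xs.extend(yc[:-1])
        ys.extend(yc[1:])
    return sum(x * v for x, v in zip(xs, ys)) / sum(x * x for x in xs)

print(centered_ar_estimate())  # well below the true phi = 0.4
```

With T = 10 occasions the bias is roughly -(1 + phi)/(T - 1), i.e. about -0.16 here, which is why the question "is there on average an autoregressive effect?" is poorly served by this centering choice.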
A Dynamical Model of General Intelligence
van der Maas, Han L. J.; Dolan, Conor V.; Grasman, Raoul P. P. P., et al.
Psychological Review, 10/2006, Volume 113, Issue 4
Journal Article, Peer Reviewed
Scores on cognitive tasks used in intelligence tests correlate positively with each other; that is, they display a positive manifold of correlations. The positive manifold is often explained by positing a dominant latent variable, the g factor, associated with a single quantitative cognitive or biological process or capacity. In this article, a new explanation of the positive manifold based on a dynamical model is proposed, in which reciprocal causation or mutualism plays a central role. It is shown that the positive manifold emerges purely by positive beneficial interactions between cognitive processes during development. A single underlying g factor plays no role in the model. The model offers explanations of important findings in intelligence research, such as the hierarchical factor structure of intelligence, the low predictability of intelligence from early childhood performance, the integration/differentiation effect, the increase in the heritability of g, and the Jensen effect, and is consistent with current explanations of the Flynn effect.
Despite it being widely acknowledged that the most important function of memory is to facilitate the prediction of significant events in a complex world, no studies to date have investigated how our ability to infer associations across distinct but overlapping experiences is affected by the inclusion of threat memories. To address this question, participants (n = 35) encoded neutral predictive associations (A → B). The following day these memories were reactivated by pairing B with a new aversive or neutral outcome (B → C) while pupil dilation was measured as an index of emotional arousal. Then, again 1 day later, the accuracy of indirect associations (A → C?) was tested. Associative inferences involving a threat-learning memory were impaired, whereas the initial memories were retroactively strengthened, but these effects were not moderated by pupil dilation at encoding. These results imply that a healthy memory system may compartmentalize episodic information of threat, and so hinder its recall when cued only indirectly. Malfunctioning of this process may cause maladaptive linkage of negative events to distant and benign memories, and thereby contribute to the development of clinical intrusions and anxiety.
Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect their inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those of the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output from response time models.
Dropout from psychotherapy for borderline personality disorder (BPD) is a notorious problem. We investigated whether treatment, treatment format, treatment setting, substance use exclusion criteria, proportion of males, mean age, country, and other variables influenced dropout.
From PubMed, Embase, Cochrane, PsycINFO and other sources, 111 studies (159 treatment arms, N = 9100) of psychotherapy for non-forensic adult patients with BPD were included. Dropout per quarter during one year of treatment was analyzed at the participant level with multilevel survival analysis, to deal with multiple predictors, nonconstant dropout chance over time, and censored data. Multiple imputation was used to estimate the quarter of dropout if unreported. Sensitivity analyses were done by excluding DBT arms with deviating push-out rules.
Dropout was highest in the first quarter of treatment. Schema therapy had the lowest dropout overall, and mentalization-based treatment in the first two quarters. Community treatment by experts had the highest dropout. Moreover, individual therapy had the lowest dropout and group therapy the highest, with combined formats in between. Other variables such as age or substance-use exclusion criteria were not associated with dropout.
The findings do not support claims that all treatments are equal, and indicate that efforts to reduce dropout should focus on early stages of treatment and on group treatment.
Many psychologists do not realize that exploratory use of the popular multiway analysis of variance harbors a multiple-comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, the probability of at least one Type I error (if all null hypotheses are true) is 14% rather than 5%, if the three tests are independent. We explain the multiple-comparison problem and demonstrate that researchers almost never correct for it. To mitigate the problem, we describe four remedies: the omnibus F test, control of the familywise error rate, control of the false discovery rate, and preregistration of the hypotheses.
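The 14% figure in the abstract follows directly from the independence assumption, and a Bonferroni-style correction (one standard way to control the familywise error rate) brings it back near 5%:

```python
def familywise_alpha(n_tests, alpha=0.05):
    """P(at least one Type I error) across n independent tests,
    each run at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

# Three tests (two main effects + one interaction) at alpha = .05:
print(round(familywise_alpha(3), 3))            # 0.143, the ~14% above
# Bonferroni correction: test each at alpha / 3 instead:
print(round(familywise_alpha(3, 0.05 / 3), 3))  # 0.049, back near 5%
```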