Cognitive control theory suggests that goal-directed behavior is governed by a dynamic interplay between areas of the prefrontal cortex. Critical to cognitive control is the detection and resolution of competing stimulus or response representations (i.e., conflict). Event-related potential (ERP) research provides a window into the nature and precise temporal sequence of conflict monitoring. We critically review the research on conflict-related ERPs, including the error-related negativity (ERN), Flanker N2, Stroop N450, and conflict slow potential (conflict SP, or negative slow wave, NSW), and provide an analysis of how these ERPs inform conflict monitoring theory. Overall, there is considerable evidence that the amplitude of the ERN is sensitive to the degree of response conflict, consistent with a role in conflict monitoring. It remains unclear, however, to what degree contextual, individual, affective, and motivational factors influence ERN amplitudes and how ERN amplitudes are related to regulative changes in behavior. The Flanker N2, Stroop N450, and conflict SP represent distinct conflict-monitoring processes that reflect conflict detection (N2, N450) and conflict adjustment or resolution (N2, conflict SP). The investigation of conflict adaptation effects (i.e., sequence or sequential trial effects) shows that the N2 and conflict SP reflect post-conflict adjustments in cognitive control, but the N450 generally does not. Conflict-related ERP research provides a promising avenue for understanding the effects of individual differences on cognitive control processes in healthy, neurologic, and psychiatric populations. Comparisons between the major conflict-related ERPs and suggestions for future studies to clarify the nature of conflict-related neural processes are provided.
•The error-related negativity (ERN) is modulated by attention to the target stimulus.
•N2, N450, and conflict SP are distinct potentials.
•Conflict adaptation provides a tool to examine conflict-related control.
•Application in clinical populations could lead to treatment options.
What do people feel like doing after they have exerted cognitive effort or are bored? Here, we empirically test whether people are drawn to rewards (at the neural level) following cognitive effort and boredom. This elucidates the experiences and consequences of engaging in cognitive effort and compares them to the consequences of experiencing boredom, an affective state with predicted similar motivational consequences. Event-related potentials were recorded after participants (N = 243) were randomized into one of three conditions: boredom (passively observing strings of numbers), cognitive effort (adding 3 to each digit of a four-digit number), or control. In the subsequent task, we focused on the feedback negativity (FN) to assess the brain's immediate response to the presence or absence of reward. Phenomenologically, participants in the boredom condition reported more fatigue than those in the cognitive effort condition, despite reporting exerting less effort. Results suggest that participants in the boredom condition exhibited larger FN amplitudes than participants in the control condition, while the cognitive effort condition differed from neither the boredom nor the control condition. The neural and methodological implications for ego depletion research, including issues of replicability, are discussed.
•A boredom induction led to more subjective fatigue than a cognitive effort condition.
•Reward sensitivity was larger following the boredom condition than the control condition.
•Reward sensitivity following cognitive effort differed from neither boredom nor control.
•No relation between self-reported fatigue and reward sensitivity.
There is increasing focus across scientific fields on adequate sample sizes to ensure non-biased and reproducible effects. Very few studies, however, report sample size calculations or even the information needed to accurately calculate sample sizes for grants and future research. We systematically reviewed 100 randomly selected clinical human electrophysiology studies from six high-impact journals that frequently publish electroencephalography (EEG) and event-related potential (ERP) research to determine the proportion of studies that reported sample size calculations, as well as the proportion reporting the components necessary to complete such calculations. Studies were coded by the two authors, each blinded to the other's results. Inter-rater reliability was 100% for the sample size calculations, and kappa was above 0.82 for all other variables. Zero of the 100 studies (0%) reported sample size calculations. 77% utilized repeated-measures designs, yet zero studies (0%) reported the variances and correlations among repeated measures necessary to accurately calculate future sample sizes. Most studies (93%) reported test statistics (e.g., F or t values). Only 40% reported effect sizes, 56% reported mean values, and 47% reported indices of variance (e.g., standard deviations/standard errors). The absence of such information hinders accurate determination of sample sizes for study design, grant applications, and meta-analyses, and makes it difficult to evaluate whether studies were adequately powered to detect effects of interest. Increased focus on sample size calculations, utilization of registered reports, and presentation of the information and statistics that future researchers need for sample size calculations will increase sample size-related scientific rigor in human electrophysiology research.
•Hypothesized sample size calculations are rarely reported.
•100 randomly selected ERP/EEG articles were reviewed.
•0 of 100 articles reported sample size calculations.
•0% reported the information needed for future sample size calculations.
•Authors should report sample size information to improve rigor.
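To make concrete why the unreported variances and correlations among repeated measures matter, here is a minimal Python sketch of a paired-design sample size calculation. All numbers are hypothetical, the function name is illustrative, and the normal-approximation formula slightly underestimates n relative to an exact noncentral-t calculation:

```python
import math
from scipy import stats

def paired_n(mean_diff, sd1, sd2, r, alpha=0.05, power=0.80):
    """Approximate n for a paired (repeated-measures) design.

    The SD of the difference scores depends on the two condition SDs
    and their correlation, the quantities the review found were
    never reported."""
    sd_diff = math.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)
    dz = mean_diff / sd_diff  # within-subject effect size (Cohen's d_z)
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(((z_a + z_b) / dz) ** 2)

# Hypothetical ERP example: a 1.5 uV condition difference, condition SDs
# of 4 uV, and a .70 correlation between the repeated measures
n_needed = paired_n(1.5, 4.0, 4.0, 0.70)
```

Note that the same mean difference and SDs with a higher between-condition correlation yield a much smaller required n, which is exactly why the correlation must be reported.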
Methodological reporting guidelines for studies of ERPs were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters represents a barrier to replication efforts, and difficulty with replicability increases in the presence of small sample sizes and low statistical power. We assessed whether guidelines are followed and estimated the average sample size and power in recent research. Reporting behavior, sample sizes, and statistical designs were coded for 150 randomly sampled articles from five high‐impact journals that frequently published ERP research from 2011 to 2017. An average of 63% of guidelines were reported, and reporting behavior was similar across journals, suggesting that gaps in reporting are a shortcoming of the field rather than of any specific journal. Publication of the guidelines article had no impact on reporting behavior, suggesting that editors and peer reviewers are not enforcing these recommendations. The average sample size per group was 21. Statistical power was conservatively estimated as .72‒.98 for a large effect size, .35‒.73 for a medium effect, and .10‒.18 for a small effect. These findings indicate that failing to report key guidelines is ubiquitous and that ERP studies are primarily powered to detect large effects. Such low power and insufficient adherence to reporting guidelines represent substantial barriers to replication efforts. The methodological transparency and replicability of studies can be improved by the open sharing of processing code and experimental tasks and by a priori sample size calculations to ensure adequately powered studies.
Methodological reporting guidelines for studies of ERPs identify the key parameters that are necessary for replication efforts. The present study estimates that only 63% of key parameters are reported in 150 articles published from 2011‒2017. Publication of the 2014 guidelines was not associated with a change in reporting. For these articles, the average sample size per group was 21, and the statistical power was estimated as .72‒.98 for a large effect size, .35‒.73 for a medium effect, and .10‒.18 for a small effect. Failure to report key parameters and low statistical power represent substantial barriers to replication efforts.
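The reported power range can be approximated with a standard noncentral-t power calculation. A sketch, assuming a two-sided independent-samples t test at alpha = .05 with the review's average of 21 participants per group (the actual designs in the sampled articles varied, which is why the article reports ranges rather than single values):

```python
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided independent-samples t test, computed from
    the noncentral t distribution."""
    df = 2 * n_per_group - 2
    nc = d * (n_per_group / 2) ** 0.5          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Conventional large, medium, and small effects with n = 21 per group
for label, d in [("large", 0.8), ("medium", 0.5), ("small", 0.2)]:
    print(label, round(two_sample_power(d, 21), 2))
```

If a packaged solution is preferred, `TTestIndPower` in statsmodels implements the same calculation.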
► Conflict adaptation is present in the N2 and P3 components of the ERP.
► Sequential incongruent trials are associated with decreased N2 amplitudes.
► Sequential congruent trials are associated with increased N2 amplitudes.
► Results suggest adaptive changes in cognitive control resources.
► Findings are consistent with the conflict monitoring theory.
The purpose of this study was to investigate the cognitive control process of conflict adaptation and the recruitment of cognitive control across sequential trials, termed higher-order trial effects, using the N2 and P3 components of the scalp-recorded event-related potential (ERP). High-density ERPs were obtained from 181 healthy individuals (93 female, 88 male) during a modified Eriksen flanker task. Behavioral measures (i.e., error rates and reaction times [RTs]) and N2 and P3 amplitudes showed reliable conflict adaptation (i.e., previous-trial congruencies influenced current-trial measures). Higher-order trial effects were quantified across multiple sequential presentations of congruent or incongruent trials (e.g., four consecutive incongruent trials). For higher-order trial effects, P3 amplitudes and RTs reliably decreased across both congruent and incongruent trials. Consistent with the conflict monitoring theory, N2 amplitudes decreased across incongruent trials and increased across congruent trials. N2 amplitudes were positively correlated with incongruent-trial RTs; no significant correlations were found between P3 amplitudes and RTs. Effects remained when stimulus–response repetitions were removed. Results indicate that RTs and ERP measures are sensitive to modulations of cognitive control associated with conflict across multiple congruent and incongruent trials. Implications for the conflict monitoring theory of cognitive control are discussed.
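Conflict adaptation of the kind analyzed here is typically quantified by binning each trial according to the previous and current trial's congruency. A toy Python sketch with made-up RTs (not the study's data or analysis code):

```python
from collections import defaultdict
from statistics import mean

def conflict_adaptation(congruency, rts):
    """Bin current-trial RTs by previous/current congruency
    (cC, cI, iC, iI) to quantify sequential (conflict adaptation) effects."""
    bins = defaultdict(list)
    for prev, curr, rt in zip(congruency, congruency[1:], rts[1:]):
        bins[prev + curr.upper()].append(rt)
    return {key: mean(values) for key, values in bins.items()}

# Toy data: 'c' = congruent, 'i' = incongruent (illustrative RTs in ms)
trials = ['c', 'i', 'i', 'c', 'i', 'c', 'c', 'i']
rts    = [420, 510, 480, 430, 515, 425, 415, 505]
means = conflict_adaptation(trials, rts)
# Conflict adaptation predicts faster iI than cI trials: interference
# is reduced when the previous trial already involved conflict.
```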
In studies of event-related brain potentials (ERPs), numerous decisions about data processing are required to extract ERP scores from continuous data. Unfortunately, the systematic impact of these choices on the data quality and psychometric reliability of ERP scores, or even on the ERP scores themselves, is virtually unknown, which is a barrier to the standardization of ERPs. The aim of the present study was to optimize processing pipelines for the error-related negativity (ERN) and error positivity (Pe) by considering a multiverse of data processing choices. A multiverse analysis of a data processing pipeline examines the impact of a large set of different reasonable choices to determine the robustness of effects, such as the impact of different decisions on between-trial standard deviations (i.e., data quality) and between-condition differences (i.e., experimental effects). ERN and Pe data from 298 healthy young adults were used to determine the impact of different methodological choices on data quality and experimental effects (correct vs. error trials) at several key stages: highpass filtering, lowpass filtering, ocular artifact correction, reference, baseline adjustment, scoring sensors, and measurement procedure. This multiverse analysis yielded 3,456 ERN scores and 576 Pe scores per person. An optimized pipeline for the ERN included a 15 Hz lowpass filter, ICA-based ocular artifact correction, and a region of interest (ROI) approach to scoring. For the Pe, the optimized pipeline included a 0.10 Hz highpass filter, a 30 Hz lowpass filter, regression-based ocular artifact correction, a -200 to 0 ms baseline adjustment window, and an ROI approach to scoring. The multiverse approach can be used to optimize pipelines for eventual standardization, which would support efforts toward establishing normative ERP databases.
The proposed process of analyzing the data-processing multiverse of ERP scores paves the way for better refinement, identification, and selection of data processing parameters, ultimately improving the precision and utility of ERPs.
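The core of a multiverse analysis is a full factorial grid over processing choices. A minimal Python sketch with hypothetical choice sets (the study's actual factors and levels differ; it evaluated 3,456 ERN pipelines):

```python
from itertools import product

# Hypothetical choice sets for each processing stage; a multiverse
# enumerates every combination and scores the data under each one.
pipeline_choices = {
    "highpass_hz": [None, 0.01, 0.05, 0.10],
    "lowpass_hz":  [None, 15, 30],
    "ocular":      ["none", "regression", "ICA"],
    "reference":   ["average", "mastoids"],
    "baseline_ms": [(-200, 0), (-100, 0)],
    "scoring":     ["single_sensor", "ROI"],
}

def multiverse(choices):
    """Yield one dict per unique pipeline in the full factorial grid."""
    keys = list(choices)
    for combo in product(*(choices[k] for k in keys)):
        yield dict(zip(keys, combo))

pipelines = list(multiverse(pipeline_choices))
# 4 * 3 * 3 * 2 * 2 * 2 = 288 candidate pipelines in this sketch; each
# would then be compared on data quality and experimental effects.
```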
There is considerable variability in the quantification of event‐related potential (ERP) amplitudes and latencies. We examined the susceptibility of ERP quantification measures to incremental increases in background noise through published ERP data and simulations. Measures included mean amplitude, adaptive mean, peak amplitude, peak latency, and centroid latency. Results indicated that mean amplitude was the most robust against increases in background noise. The adaptive mean measure was more biased but represented an efficient estimator of the true ERP signal, particularly given individual‐subject latency variability. Strong evidence is provided against using peak amplitude. For latency measures, the peak latency measure was less biased and less efficient than the centroid latency measure. Results emphasize the importance of reporting the number of trials retained for averaging as well as noise estimates for groups and conditions when comparing ERPs.
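The noise sensitivity of peak versus mean amplitude is easy to demonstrate in simulation. A Python sketch with a synthetic Gaussian "component" and white noise (illustrative only; the study's simulations were based on published ERP data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(-100, 600)                                 # time in ms
signal = 5 * np.exp(-((t - 300) ** 2) / (2 * 50 ** 2))   # 5 uV component
win = (t >= 250) & (t <= 350)                            # scoring window

def average_scores(noise_sd, n_trials=30, n_sims=200):
    """Mean and peak amplitude of the trial-averaged ERP, averaged
    over n_sims simulated participants."""
    means, peaks = [], []
    for _ in range(n_sims):
        avg = signal + rng.normal(0, noise_sd, (n_trials, t.size)).mean(axis=0)
        means.append(avg[win].mean())
        peaks.append(avg[win].max())
    return np.mean(means), np.mean(peaks)

mean_lo, peak_lo = average_scores(noise_sd=2)
mean_hi, peak_hi = average_scores(noise_sd=20)
# Peak amplitude is biased upward as background noise grows (the max
# operation captures noise excursions); mean amplitude is not, because
# noise averages toward zero within the window.
```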
Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between the error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods, as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses.
•Introduces key Bayesian concepts.
•Illustrates how to set up a Bayesian model, with a particular emphasis on regression models.
•Describes evaluation and interpretation of models.
•Provides practical guidelines for researchers wishing to use Bayesian models.
•Includes R code and data for replicating analyses.
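The paper's analyses are in R; as a language-agnostic illustration of the underlying idea, here is a Python sketch of a conjugate Bayesian slope estimate for a simulated ERN-anxiety regression. This is a deliberate simplification: a real analysis would also place a prior on the noise SD, include an intercept, and sample the posterior with MCMC rather than solving it in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in data (the paper uses real ERN/anxiety data): more
# anxious participants are generated to show a more negative ERN.
anxiety = rng.normal(0, 1, 100)
ern = -0.5 * anxiety + rng.normal(0, 1, 100)

# Conjugate normal model for the slope, noise SD assumed known:
#   prior       beta ~ N(0, tau^2)
#   likelihood  ern_i ~ N(beta * anxiety_i, sigma^2)
tau, sigma = 1.0, 1.0
post_prec = 1 / tau**2 + (anxiety ** 2).sum() / sigma**2
post_mean = (anxiety * ern).sum() / sigma**2 / post_prec
post_sd = post_prec ** -0.5

# 95% credible interval for the slope
ci = (post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd)
```

The posterior mean shrinks the least-squares slope toward the prior mean of zero, with the amount of shrinkage governed by the prior SD `tau` relative to the information in the data.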
In studies of event‐related brain potentials (ERPs), difference scores between conditions in a task are frequently used to isolate neural activity for use as a dependent or independent variable. Adequate score reliability is a prerequisite for studies examining relationships between ERPs and external correlates, but there is no extensive treatment on the suitability of the various available approaches to estimating difference score reliability that focus on ERP research. In the present study, we provide formulas from classical test theory and generalizability theory for estimating the internal consistency of subtraction‐based and residualized difference scores. These formulas are then applied to error‐related negativity (ERN) and reward positivity (RewP) difference scores from the same sample of 117 participants. Analyses demonstrate that ERN difference scores can be reliable, which supports their use in studies of individual differences. However, RewP difference scores yielded poor reliability due to the high correlation between the constituent reward and non‐reward ERPs. Findings emphasize that difference score reliability largely depends on the internal consistency of constituent scores and the correlation between those scores. Furthermore, generalizability theory yields more suitable estimates of internal consistency for subtraction‐based difference scores than classical test theory. We conclude that ERP difference scores can show adequate reliability and be useful for isolating neural activity in studies of individual differences.
In studies of event‐related brain potentials (ERPs), difference scores are used to isolate neural activity for statistical analysis, despite concerns about their low internal consistency. This manuscript describes the optimal approaches for estimating the internal consistency of ERP difference scores and clarifies the measurement characteristics that improve their reliability.
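The dependence of difference score reliability on the constituent reliabilities and their intercorrelation follows from the standard classical test theory formula. A Python sketch with hypothetical values chosen to mimic the ERN-like and RewP-like patterns described above (the study's actual estimates differ):

```python
def diff_score_reliability(r11, r22, r12, sd1, sd2):
    """Classical test theory reliability of a subtraction-based
    difference score D = X1 - X2, given the reliabilities (r11, r22),
    correlation (r12), and SDs of the constituent scores."""
    num = sd1**2 * r11 + sd2**2 * r22 - 2 * r12 * sd1 * sd2
    den = sd1**2 + sd2**2 - 2 * r12 * sd1 * sd2
    return num / den

# Reliable constituents with a modest intercorrelation (ERN-like case)
ern_like = diff_score_reliability(0.90, 0.90, 0.40, 3.0, 3.0)

# The same constituents, but highly correlated (RewP-like case): the
# subtraction removes most of the reliable variance.
rewp_like = diff_score_reliability(0.90, 0.90, 0.85, 3.0, 3.0)
```

With these illustrative numbers the ERN-like difference score stays reliable (about .83) while the RewP-like score drops to about .33, reproducing the qualitative pattern the abstract reports.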
Barriers to accessing scientific findings contribute to knowledge inequalities based on financial resources and decrease the transparency and rigor of scientific research. Recent initiatives aim to improve access to research as well as methodological rigor via transparency and openness. We sought to determine the impact of such initiatives on open access publishing in the sub-area of human electrophysiology and the impact of open access on the attention articles received in the scholarly literature and other outlets. Data for 35,144 articles across 967 journals from the last 20 years were examined. Approximately 35% of articles were open access, and the rate of publication of open-access articles increased over time. Open access articles showed 9 to 21% more PubMed and CrossRef citations and 39% more Altmetric mentions than closed access articles. Green open access articles (i.e., author archived) did not differ from non-green open access articles (i.e., publisher archived) with respect to citations and were related to higher Altmetric mentions. These findings demonstrate that open-access publishing is increasing in popularity in the sub-area of human electrophysiology and that open-access articles enjoy the “open access advantage” in citations similar to the larger scientific literature. The benefit of the open access advantage may motivate researchers to make their publications open access and pursue publication outlets that support it. In consideration of the direct connection between citations and journal impact factor, journal editors may improve the accessibility and impact of published articles by encouraging authors to self-archive manuscripts on preprint servers.
•Barriers to accessing science contribute to knowledge inequalities.
•35% of articles published in the last 20 years in electrophysiology are open access.
•Open access articles received 9–21% more citations and 39% more Altmetric mentions.
•Green open access (author archived) enjoyed a similar benefit to gold open access.
•Studies of human electrophysiology enjoy the “open access advantage” in citations.