Abstract
Objective
This study attempted to clarify the applicability of standard error (SE) terms in clinical research when examining the impact of short-term practice effects on cognitive performance via reliable change methodology.
Method
This study compared McSweeney’s SE of the estimate (SEest) to Crawford and Howell’s SE of prediction (SEpred) using a developmental sample of 167 participants with either normal cognition or mild cognitive impairment (MCI) assessed twice over 1 week. Using these SEs, previously published standardized regression-based (SRB) reliable change prediction equations were then applied to an independent sample of 143 participants with MCI.
Results
This clinical developmental sample yielded nearly identical SE values (e.g., 3.697 vs. 3.719 for HVLT-R Total Recall SEest and SEpred, respectively), and the resultant SRB-based discrepancy z scores were comparable and strongly correlated (r = 1.0, p < .001). With either SE term, observed follow-up scores for our sample with MCI fell consistently below expectations derived from Duff’s SRB algorithms.
Conclusions
These results appear to replicate and extend previous work showing that the calculation of the SEest and SEpred from a clinical sample of cognitively intact and MCI participants yields similar values and can be incorporated into SRB reliable change statistics with comparable results. As a result, neuropsychologists utilizing reliable change methods in research investigation (or clinical practice) should carefully balance mathematical accuracy and ease of use, among other factors, when determining which SE metric to use.
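The two SE terms and the resulting SRB discrepancy z scores can be sketched in code. This is a minimal illustration using the standard simple-regression formulas, not the authors’ published coefficients; the function and variable names are mine, and the data passed in would be test–retest scores from a normative sample.

```python
import math

def srb_reliable_change(t1_scores, t2_scores, t1_new, t2_observed):
    """Fit a simple regression of Time 2 on Time 1 scores, then express a
    new case's observed Time 2 score as an SRB discrepancy z score using
    either SE term (illustrative sketch, not the published equations)."""
    n = len(t1_scores)
    mean1 = sum(t1_scores) / n
    mean2 = sum(t2_scores) / n
    sxx = sum((x - mean1) ** 2 for x in t1_scores)
    sxy = sum((x - mean1) * (y - mean2)
              for x, y in zip(t1_scores, t2_scores))
    b = sxy / sxx                      # regression slope
    a = mean2 - b * mean1              # intercept
    # Residual (error) variance on n - 2 degrees of freedom
    sse = sum((y - (a + b * x)) ** 2
              for x, y in zip(t1_scores, t2_scores))
    se_est = math.sqrt(sse / (n - 2))  # SE of the estimate (McSweeney-style)
    # SE of prediction for a new observation at t1_new (Crawford & Howell-style)
    se_pred = se_est * math.sqrt(1 + 1 / n + (t1_new - mean1) ** 2 / sxx)
    predicted = a + b * t1_new
    z_est = (t2_observed - predicted) / se_est
    z_pred = (t2_observed - predicted) / se_pred
    return se_est, se_pred, z_est, z_pred
```

Because the extra terms under the square root shrink toward zero as the normative n grows, se_pred converges on se_est with moderately large samples, which is consistent with the near-identical values and perfectly correlated z scores reported above.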
Objective: Despite expansion of telecommunication strategies across health services and data supporting feasibility of videoconference-based neuropsychological assessment, relatively little is known about teleneuropsychology (TeleNP) use in practice. The current COVID-19 pandemic provides an opportunity for greater use of TeleNP and understanding of neuropsychologists' experience with this unique assessment medium.
Methods: During the course of a no-cost global webinar related to practical/ethical considerations of TeleNP practice, attendees were invited to engage in a 26-question survey about their TeleNP use and related COVID-19 concerns. TeleNP practices before the COVID-19 pandemic and early on during the global outbreak were queried among survey participants, along with examination of TeleNP intentions following COVID-19.
Results: Multiple countries were represented across five continents, with two-thirds of respondents being from the United States. Approximately one-fourth of respondents reported using TeleNP for clinical interview, feedback, and intervention prior to the onset of the COVID-19 pandemic, and approximately one-tenth of individuals used TeleNP for test administration. Increased use of TeleNP for clinical interview, feedback, and intervention was reported within the first few weeks of the global COVID-19 outbreak, though the use of TeleNP for testing remained relatively unchanged. Most respondents indicated an intention for future use of TeleNP.
Conclusions: Our findings suggest the use of TeleNP is increasing, although use of remote TeleNP testing is still developing. Findings also illustrate increasing use of TeleNP in the context of the COVID-19 pandemic and encourage follow-up investigation to understand the changing practices and rates of TeleNP provision over time.
Abstract
Background
The learning ratio (LR) is a novel learning slope score that was developed to identify learning more accurately by considering the proportion of information learned after the first trial of a multi-trial learning task. Specifically, LR is the number of items learned after trial one divided by the number of items yet to be learned. Although research on LR has been promising, convergent validation, clinical characterization, and demographic norming of this LR metric are warranted to understand its clinical utility when derived from the Rey Auditory Verbal Learning Test (RAVLT).
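The LR definition above translates directly into code. This is a minimal sketch: the function name is mine, and taking the gain as the final trial minus trial one is one common operationalization (an assumption, not a detail taken from this study). For the RAVLT, the maximum per-trial score is 15 words.

```python
def learning_ratio(trial_scores, max_score=15):
    """Learning Ratio (LR): items gained after trial one divided by the
    items still available to be learned after trial one.
    trial_scores: per-trial raw scores from a multi-trial list-learning
    task (e.g., RAVLT Trials 1-5, each out of max_score).
    The gain is computed as final trial minus trial one, one common
    operationalization (an assumption here)."""
    trial1, final = trial_scores[0], trial_scores[-1]
    remaining = max_score - trial1
    if remaining == 0:       # perfect trial one: nothing left to learn
        return None          # LR is undefined at ceiling
    return (final - trial1) / remaining
```

For example, RAVLT trials of 5, 7, 9, 11, and 12 words yield a gain of 7 words against 10 available, i.e., LR = 0.70, whereas a traditional raw slope (final minus first trial) would ignore how much room there was to improve.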
Method
Data from 674 robustly cognitively intact older participants from the Alzheimer’s Disease Neuroimaging Initiative (aged 54–89) were used to calculate the LR metric. LR’s relationship with standard memory measures was compared against that of traditional learning slope metrics. In addition, retest reliability at 6, 12, and 24 months was examined, and demographically adjusted normative comparisons were developed.
Results
Lower LR scores were associated with poorer performances on memory measures, and LR scores outperformed traditional learning slope calculations across all analyses. Retest reliability exceeded acceptability thresholds across time, and demographically adjusted normative equations suggested better performance for cognitively intact participants than those with mild cognitive impairment.
Conclusions
These results suggest that this LR score possesses sound retest reliability and can better reflect learning capacity than traditional learning slope calculations. With the added development and validation of regression-based normative comparisons, these findings support the use of the RAVLT LR as a clinical tool to inform clinical decision-making and treatment.
A novel learning slope score - the Learning Ratio (LR) - has recently been developed that appears more sensitive to memory performance and AD pathology than traditional learning slope calculations. While promising, this research to date has been both experimental and based on group differences, and therefore does not aid in the interpretation of individual LR performance for either clinical or research settings. The objective of the current study was to develop demographically-corrected normative data on these LR learning slopes on verbal learning measures from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS).
The current study examined the influence of age and education on LR metrics for the List Learning, Story Memory, and an Aggregated RBANS score in 200 cognitively intact adults aged 65 or older using linear regression.
Age and education correlated with most LR metrics, but no sex differences were observed. Linear regression permitted the prediction of LR values from age and education, which were then compared to observed LR values. The result is a set of demographically-corrected T scores for these LR metrics.
Comparing observed LR scores with those predicted from regression-based equations represents a first step toward interpreting individual performances on this metric for clinical decision making and treatment planning. With future replication in diverse and heterogeneous samples, we hope to offer a new clinical tool for the examination of learning slopes in older adults.
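Regression-based norming of this kind follows a standard pattern: predict the score from demographics, then scale the residual to a T metric (mean 50, SD 10). A minimal sketch, with all coefficient values as hypothetical placeholders rather than the study’s published equations:

```python
def demographically_corrected_t(observed_lr, age, education,
                                b0, b_age, b_edu, se_est):
    """Convert an observed LR into a demographically corrected T score.
    b0/b_age/b_edu: regression coefficients estimated in a normative
    sample; se_est: the standard error of estimate of that regression.
    All coefficient values supplied by a caller of this sketch are
    hypothetical placeholders, not published norms."""
    predicted = b0 + b_age * age + b_edu * education
    z = (observed_lr - predicted) / se_est   # standardized residual
    return 50 + 10 * z                       # T metric: mean 50, SD 10
```

A T score near 50 indicates performance typical for the person’s age and education; scores well below 50 flag learning poorer than demographically expected.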
Background: The Learning Ratio (LR) is a novel learning slope score that has been developed to reduce the inherent competition between the first trial and subsequent trials in traditional learning slopes. Recent findings suggest that LR is sensitive to AD pathology along the AD continuum - more so than the traditional learning calculations that employ raw changes across trials. However, research is still experimental and not yet directly applicable to clinical settings. Consequently, the objective of the current study was to develop demographically-corrected normative data on these LR learning slopes.
Method: The current study examined the influence of age and education on LR scores for the HVLT-R, BVMT-R, and an Aggregated HVLT-R/BVMT-R in 200 cognitively intact adults aged 65 years and older using linear regression.
Results: Age negatively correlated with all LR metrics, and education positively correlated with most. No sex differences were identified. LR values were predicted from age and education, which can be compared to observed LR values and converted into demographically-corrected T scores.
Conclusions: Comparing observed LR scores with those predicted from regression-based equations permits interpretations that aid clinical decision making and treatment planning. Co-norming of the HVLT-R and BVMT-R also allows for comparisons between verbal and visual learning slope scores in individual patients. We hope normative data for LR enhances its utility as a clinical tool for examining learning slopes in older adults administered the HVLT-R and/or BVMT-R.
Practice effects on cognitive testing in mild cognitive impairment (MCI) and Alzheimer's disease (AD) remain understudied, especially with how they compare to biomarkers of AD.
The current study sought to add to this growing literature.
Cognitively intact older adults (n = 68), those with amnestic MCI (n = 52), and those with mild AD (n = 45) completed a brief battery of cognitive tests at baseline and again after one week, and they also completed a baseline amyloid PET scan, a baseline MRI, and a baseline blood draw to obtain APOE ɛ4 status.
The intact participants showed significantly higher baseline cognitive scores and larger practice effects than the other two groups on overall composite measures. Those with MCI showed significantly higher baseline scores and larger practice effects than AD participants on the composite. For amyloid deposition, the intact participants had significantly less tracer uptake, whereas MCI and AD participants were comparable. For total hippocampal volumes, all three groups were significantly different in the expected direction (intact > MCI > AD). For APOE ɛ4, the intact participants had significantly fewer copies of ɛ4 than MCI and AD participants. The effect sizes of the baseline cognitive scores and practice effects were comparable, and they were significantly larger than effect sizes of biomarkers in 7 of the 9 comparisons.
Baseline cognition and short-term practice effects appear to be sensitive markers in late life cognitive disorders, as they separated groups better than commonly-used biomarkers in AD. Further development of baseline cognition and short-term practice effects as tools for clinical diagnosis, prognostic indication, and enrichment of clinical trials seems warranted.
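Group separation of the kind reported above is typically quantified with a standardized effect size such as Cohen’s d with a pooled standard deviation. A minimal sketch (the function name and any data passed in are illustrative, not the study’s):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with pooled SD: the standardized mean difference
    between two independent groups, e.g., one-week practice-effect
    scores (retest minus baseline) for intact vs. MCI participants."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb)
                          / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Computing d for each marker (baseline composite, practice-effect composite, amyloid uptake, hippocampal volume, ɛ4 copies) on a common scale is what allows the head-to-head comparisons of cognitive markers against biomarkers described above.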
The debate over Hasher and Zacks’ effort hypothesis—that performance on effortful tasks by patients with depression will be disproportionately worse than their performance on automatic tasks—shows a need for additional research to settle whether or not this notion is “clinical lore.” In this study, we categorized 285 outpatient recipients of neuropsychological evaluations into three groups—No Depression, Mild-to-Moderate Depression, and Severe Depression—based on their Beck Depression Inventory-II self-reports. We then compared these groups’ performances on both “automatic” and “effortful” versions of the Ruff 2 & 7 Selective Attention Test Total Speed and Total Accuracy Indices, the Digit Span subtest from the Wechsler Adult Intelligence Scale—Fourth Edition, and Trail Making Test Parts A and B, using a two-way (3 × 2) mixed multivariate analysis of variance. Patients with Mild-to-Moderate Depression or Severe Depression performed disproportionately worse than patients with No Depression in our sample on more effortful versions of only one of the four attention or executive functioning measures (Trail Making Test). Thus, these data failed to fully support the hypothesis of disproportionately worse performance on more effortful tasks. Although this study did not negate the effort hypothesis in every instance, particularly for the Trail Making Test, the findings suggest caution in routinely applying the effort hypothesis when interpreting test findings in most clinical settings and for most measures.