Competing theories of dyslexia posit that reading difficulties arise from impaired visual, auditory, phonological, or statistical learning mechanisms. Importantly, many theories hold that dyslexia reflects a cascade of impairments emanating from a single "core deficit". Here we report two studies evaluating core-deficit and multifactorial models. In Study 1, we use publicly available data from the Healthy Brain Network to test the accuracy of phonological processing measures for predicting dyslexia diagnosis and find that over 30% of cases are misclassified (sensitivity = 66.7%; specificity = 68.2%). In Study 2, we collect a battery of psychophysical measures of visual motion processing and standardized measures of phonological processing in 106 school-aged children to investigate whether dyslexia is best conceptualized under a core-deficit model or as a disorder with heterogeneous origins. Specifically, by capitalizing on the drift diffusion model to analyze performance on a visual motion discrimination experiment, we show that deficits in visual motion processing, perceptual decision-making, and phonological processing manifest largely independently. Based on statistical models of how variance in reading skill is parceled across measures of visual processing, phonological processing, and decision-making, our results challenge the notion that a unifying deficit characterizes dyslexia. Instead, these findings indicate a model where reading skill is explained by several distinct, additive predictors, or risk factors, of reading (dis)ability.
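The classification accuracy reported for Study 1 can be unpacked with a small sketch. The cell counts below are hypothetical, chosen only to approximate the reported rates, since the abstract does not give the underlying confusion matrix:

```python
# Sketch of how sensitivity, specificity, and overall misclassification relate.
def classification_summary(tp, fn, tn, fp):
    """Return (sensitivity, specificity, misclassification rate)."""
    sensitivity = tp / (tp + fn)                  # hit rate among diagnosed cases
    specificity = tn / (tn + fp)                  # correct-rejection rate among controls
    misclassified = (fn + fp) / (tp + fn + tn + fp)
    return sensitivity, specificity, misclassified

# Hypothetical balanced sample of 200 children (counts invented to roughly
# match the reported rates):
sens, spec, miss = classification_summary(tp=67, fn=33, tn=68, fp=32)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, misclassified={miss:.1%}")
# → sensitivity=67.0%, specificity=68.0%, misclassified=32.5%
```

Sensitivity and specificity both near two-thirds imply that roughly a third of children are misclassified in a balanced sample, whichever way the errors split.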
Using predictors from a visual motion processing experiment and linguistic measures, we show that a single‐mechanism model of reading disability cannot account for the range of linguistic and sensory processing outcomes observed in children. We propose an additive risk factor model where different aspects of sensory, cognitive and language function each contribute independently to reading development.
In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on the auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a disturbed environment, we carried out a simultaneous fMRI-EEG experiment based on discriminating syllabic minimal pairs involving three phonological contrasts, each bearing on a single phonetic feature characterised by different degrees of visual distinctiveness. The contrasts involved either labialisation of the vowels, or place of articulation or voicing of the consonants. Audiovisual consonant-vowel syllable pairs were presented either with a static facial configuration or with a dynamic display of articulatory movements related to speech production. In the sound-disturbed MRI environment, the significant improvement of syllabic discrimination achieved in the dynamic audiovisual modality, compared to the static audiovisual modality, was associated with activation of the occipito-temporal cortex (MT+/V5) bilaterally, and of the left premotor cortex. While the former was activated in response to facial movements independently of their relation to speech, the latter was specifically activated by phonological discrimination. During fMRI, significant evoked potential responses to syllabic discrimination were recorded around 150 and 250 ms following the onset of the second stimulus of the pairs, whose amplitude was greater in the dynamic compared to the static audiovisual modality. Our results provide arguments for the involvement of the speech motor cortex in phonological discrimination, and suggest a multimodal representation of speech units.
Most current Alzheimer's disease (AD) and mild cognitive impairment (MCI) studies use a single data modality to make predictions, such as classifying AD stage. Fusing multiple data modalities can provide a holistic view of AD staging analysis. Thus, we use deep learning (DL) to integrally analyze imaging (magnetic resonance imaging (MRI)), genetic (single nucleotide polymorphisms (SNPs)), and clinical test data to classify patients into AD, MCI, and controls (CN). We use stacked denoising auto-encoders to extract features from clinical and genetic data, and 3D convolutional neural networks (CNNs) for imaging data. We also develop a novel data interpretation method to identify the top-performing features learned by the deep models via clustering and perturbation analysis. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we demonstrate that deep models outperform shallow models, including support vector machines, decision trees, random forests, and k-nearest neighbors. In addition, we demonstrate that integrating multi-modality data outperforms single-modality models in terms of accuracy, precision, recall, and mean F1 scores. Our models identified the hippocampus, the amygdala, and the Rey Auditory Verbal Learning Test (RAVLT) as top distinguishing features, consistent with the known AD literature.
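The fusion step described above can be sketched minimally. The encoders here are toy stand-ins (random projections with a tanh nonlinearity), not the paper's stacked denoising auto-encoders or 3D CNNs, and all dimensions and patient counts are invented:

```python
import numpy as np

# Minimal sketch of multi-modality fusion: per-modality encoders produce
# fixed-length feature vectors that are concatenated before a shared
# classifier head. The encoders below are toy random projections, NOT the
# paper's stacked denoising auto-encoders or 3D CNNs; dimensions are invented.
rng = np.random.default_rng(0)

def encode(x, out_dim):
    w = rng.standard_normal((x.shape[-1], out_dim))
    return np.tanh(x @ w)  # toy nonlinear embedding

n_patients = 4
mri_feats  = encode(rng.standard_normal((n_patients, 64)), 8)  # imaging
snp_feats  = encode(rng.standard_normal((n_patients, 32)), 8)  # genetic
clin_feats = encode(rng.standard_normal((n_patients, 16)), 8)  # clinical

fused = np.concatenate([mri_feats, snp_feats, clin_feats], axis=1)
print(fused.shape)  # → (4, 24): one fused vector per patient
```

Keeping each modality's encoder separate until concatenation is what lets the downstream classifier weigh imaging, genetic, and clinical evidence jointly.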
Computerized cognitive training is gaining empirical support for use in the treatment of schizophrenia (SZ). Although cognitive training is efficacious for SZ at a group level when delivered in sufficiently intensive doses (e.g., 30-50 h), there is variability in individual patient response. The identification of biomarkers sensitive to the neural systems engaged by cognitive training interventions early in the course of treatment could facilitate personalized assignment to treatment. This proof-of-concept study was conducted to determine whether mismatch negativity (MMN), an event-related potential index of auditory sensory discrimination associated with cognitive and psychosocial functioning, would predict gains in auditory perceptual learning and exhibit malleability after initial exposure to the early stages of auditory cognitive training in SZ. MMN was assessed in N=28 SZ patients immediately before and after completing 1 h of a speeded time-order judgment task of two successive frequency-modulated sweeps (Posit Science 'Sound Sweeps' exercise). All SZ patients exhibited the expected improvements in auditory perceptual learning over the 1 h training period (p<0.001), consistent with previous results. Larger MMN amplitudes recorded both before and after the training exercises were associated with greater gains in auditory perceptual learning (r=-0.5 and r=-0.67, respectively, p's<0.01). Significant pretraining vs posttraining MMN amplitude reduction was also observed (p<0.02). MMN is a sensitive index of the neural systems engaged in a single session of auditory cognitive training in SZ. These findings encourage future trials of MMN as a biomarker for individual assignment, prediction, and/or monitoring of patient response to procognitive interventions, including auditory cognitive training in SZ.
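The reported brain-behavior association is an ordinary Pearson correlation; a minimal sketch follows, with synthetic data (the study's individual-subject values are not given here). Note the sign convention: MMN is a negative-going potential, so a larger MMN is a more negative value, and a negative r means larger MMN predicts greater learning gains.

```python
import statistics

# Pearson correlation between MMN amplitude and learning gain, with synthetic
# data (invented for illustration; not the study's values).
def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# MMN is negative-going: a larger MMN is a more negative microvolt value.
mmn_amplitude = [-3.1, -2.4, -1.8, -1.2, -0.6]   # µV (synthetic)
learning_gain = [0.42, 0.35, 0.30, 0.21, 0.15]   # proportion improved (synthetic)
print(f"r = {pearson_r(mmn_amplitude, learning_gain):.2f}")  # prints a negative r
```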
Training on one task (task A) can disrupt learning on a subsequently trained task (task B), illustrating anterograde learning interference. We asked whether the induction of anterograde learning interference depends on the learning stage that task A has reached when the training on task B begins. To do so, we drew on previous observations in perceptual learning in which completing all training on one task before beginning training on another task (blocked training) yielded markedly different learning outcomes than alternating training between the same two tasks for the same total number of trials (interleaved training). Those blocked versus interleaved contrasts suggest that there is a transition between two differentially vulnerable learning stages that is related to the number of consecutive training trials on each task, with interleaved training presumably tapping acquisition, and blocked training tapping consolidation. Here, we used the blocked versus interleaved paradigm in auditory perceptual learning in a case in which blocked training generated anterograde learning interference but not its converse, retrograde interference (A→B, not B→A). We report that anterograde learning interference of training on task A (interaural time difference discrimination) on learning on task B (interaural level difference discrimination) occurred with blocked training and diminished with interleaved training, with faster rates of interleaving leading to less interference. This pattern held for across-day, within-session, and offline learning. Thus, anterograde learning interference only occurred when the number of consecutive training trials on task A surpassed some critical value, consistent with other recent evidence that anterograde learning interference arises only when learning on task A has entered the consolidation stage.
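The blocked versus interleaved manipulation reduces to a trial-scheduling choice; a minimal sketch follows, with trial counts and run lengths invented for illustration:

```python
# Blocked vs interleaved schedules: same total trials per task, different run
# lengths. Trial counts below are invented for illustration.
def make_schedule(n_per_task, run_length):
    """Alternate between tasks 'A' and 'B' in runs of `run_length` trials."""
    schedule = []
    remaining = {"A": n_per_task, "B": n_per_task}
    task = "A"
    while remaining["A"] or remaining["B"]:
        run = min(run_length, remaining[task])
        schedule.extend([task] * run)
        remaining[task] -= run
        task = "B" if task == "A" else "A"
    return schedule

blocked     = make_schedule(360, 360)  # all of task A, then all of task B
interleaved = make_schedule(360, 60)   # switch task every 60 trials
print(len(blocked), len(interleaved))  # → 720 720
```

Both schedules deliver identical totals; only the maximum run of consecutive trials on task A differs, which is the variable the study links to whether consolidation (and hence anterograde interference) is engaged.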
Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. Moreover, social exposure induced an improvement in AC neuron sensitivity to auditory task cues. The magnitude of neural change during exposure correlated with task acquisition during practice. In contrast, exposure to only auditory task cues led to poorer neurometric and behavioral outcomes. Finally, social information during exposure was encoded in the AC of observer animals. Together, our results suggest that auditory SL is supported by AC neuron plasticity occurring during social exposure and prior to behavioral performance.
This study investigated a combination of eight embedded performance validity tests (PVTs) derived from commonly administered neuropsychological tests to optimize sensitivity/specificity for detecting invalid neuropsychological test performance. The goal was to evaluate which combination of these common embedded PVTs has the most robust predictive power for detecting invalid performance in a single, diverse clinical sample.
Eight previously validated memory- and nonmemory-based embedded PVTs were examined among 231 patients undergoing neuropsychological evaluation. Patients were classified into valid/invalid groups based on four independent criterion PVTs. Embedded PVT accuracy was assessed using standard and stepwise multiple logistic regression models.
Three PVTs, the Brief Visuospatial Memory Test-Revised Recognition Discrimination (BVMT-R-RD), Rey Auditory Verbal Learning Test Forced Choice, and WAIS-IV Digit Span Age-Corrected Scaled Score, predicted 45.5% of the variance in validity group membership. The BVMT-R-RD independently accounted for 32% of the variance in prediction of independent, criterion-defined validity group membership.
This study demonstrated the incremental predictive power of multiple embedded PVTs derived from common neuropsychological measures in detecting invalid test performance, and identified the measures accounting for the greatest portion of the variance. These results provide guidance for selecting the most fruitful embedded PVTs and offer proof of concept to better guide selection of embedded validity indices. Further, this offers clinicians an efficient, empirically derived approach to assessing performance validity when time constraints limit the use of freestanding PVTs.
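The abstract does not state which pseudo-R² underlies the "45.5% of the variance" figure; as one common choice for logistic models of group membership, a Nagelkerke pseudo-R² sketch is shown below, with hypothetical log-likelihoods (the study reports only the resulting percentages):

```python
import math

# Nagelkerke pseudo-R² for a logistic regression, one common way to express
# "variance explained" in group membership. The log-likelihoods below are
# hypothetical (the abstract reports only the resulting percentages).
def nagelkerke_r2(ll_null, ll_model, n):
    cox_snell = 1.0 - math.exp((2.0 / n) * (ll_null - ll_model))
    max_r2 = 1.0 - math.exp((2.0 / n) * ll_null)   # ceiling for binary outcomes
    return cox_snell / max_r2

r2 = nagelkerke_r2(ll_null=-150.0, ll_model=-100.0, n=231)
print(round(r2, 3))  # → 0.483
```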
How does prior linguistic knowledge modulate learning in verbal auditory statistical learning (SL) tasks? Here, we address this question by assessing to what extent the frequency of syllabic co‐occurrences in the learners' native language determines SL performance. We computed the frequency of co‐occurrences of syllables in spoken Spanish through a transliterated corpus, and used this measure to construct two artificial familiarization streams. One stream was constructed by embedding pseudowords with high co‐occurrence frequency in Spanish ("Spanish‐like" condition), the other by embedding pseudowords with low co‐occurrence frequency ("Spanish‐unlike" condition). Native Spanish‐speaking participants listened to one of the two streams, and were tested in an old/new identification task to examine their ability to discriminate the embedded pseudowords from foils. Our results show that performance in the verbal auditory SL (ASL) task was significantly influenced by the frequency of syllabic co‐occurrences in Spanish: When the embedded pseudowords were more "Spanish‐like," participants were better able to identify them as part of the stream. These findings demonstrate that learners' task performance in verbal ASL tasks changes as a function of the artificial language's similarity to their native language, and highlight how linguistic prior knowledge biases the learning of regularities.
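The corpus measure described above is essentially a normalized syllable-bigram count; a minimal sketch follows, with an invented toy "corpus" and syllabification:

```python
from collections import Counter

# Normalized syllable-bigram frequencies from a syllabified "corpus".
# The corpus and syllabifications below are invented for illustration.
def syllable_bigram_freqs(syllabified_words):
    counts = Counter()
    for word in syllabified_words:
        counts.update(zip(word, word[1:]))  # adjacent syllable pairs
    total = sum(counts.values())
    return {bigram: n / total for bigram, n in counts.items()}

corpus = [["ca", "sa"], ["ca", "mi", "no"], ["me", "sa"], ["ca", "sa"]]
freqs = syllable_bigram_freqs(corpus)
print(freqs[("ca", "sa")])  # → 0.4 (2 of 5 syllable transitions)
```

Pseudowords built from high-frequency bigrams under this measure correspond to the "Spanish-like" condition; those built from low-frequency bigrams correspond to "Spanish-unlike".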