A small number of blind people are adept at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. Yet the neural architecture underlying this type of aid-free human echolocation has not been investigated. To tackle this question, we recruited echolocation experts, one early- and one late-blind, and measured functional brain activity in each of them while they listened to their own echolocation sounds.
When we compared brain activity for sounds that contained both clicks and the returning echoes with brain activity for control sounds that did not contain the echoes, but were otherwise acoustically matched, we found activity in calcarine cortex in both individuals. Importantly, for the same comparison, we did not observe a difference in activity in auditory cortex. In the early-blind, but not the late-blind participant, we also found that the calcarine activity was greater for echoes reflected from surfaces located in contralateral space. Finally, in both individuals, we found activation in middle temporal and nearby cortical regions when they listened to echoes reflected from moving targets.
These findings suggest that processing of click-echoes recruits brain regions typically devoted to vision rather than audition in both early- and late-blind echolocation experts.
Word-in-noise identification is facilitated by acoustic differences between target and competing sounds and by temporal separation between the onset of the masker and that of the target. Younger and older adults are able to take advantage of onset delay when the masker is dissimilar (Noise) to the target word, but only younger adults are able to do so when the masker is similar (Babble). We examined the neural underpinning of this age difference using cortical evoked responses to words masked by either Babble or Noise when the masker preceded the target word by 100 or 600 ms in younger and older adults, after adjusting the signal-to-noise ratios (SNRs) to equate behavioural performance across age groups and conditions. For the 100 ms onset delay, the word in noise elicited an acoustic change complex (ACC) response that was comparable in younger and older adults. For the 600 ms onset delay, the ACC was modulated by both masker type and age. In older adults, the ACC to a word in babble was not affected by the increase in onset delay, whereas younger adults showed a benefit from longer delays. Hence, the age difference in sensitivity to temporal delay is indexed by early activity in the auditory cortex. These results are consistent with the hypothesis that an increase in onset delay improves stream segregation in younger adults in both noise and babble, but only in noise for older adults, and that this change in stream segregation is evident in early cortical processes.
The processing of brain diffusion tensor imaging (DTI) data for large cohort studies requires fully automatic pipelines to perform quality control (QC) and artifact/outlier removal procedures on the raw DTI data prior to calculation of diffusion parameters. In this study, three automatic DTI processing pipelines, each complying with the general ENIGMA framework, were designed by uniquely combining multiple image processing software tools. Different QC procedures based on the RESTORE algorithm, the DTIPrep protocol, and a combination of both methods were compared using simulated ground truth and artifact-containing DTI datasets modeling eddy-current-induced distortions, various levels of motion artifacts, and thermal noise. Variability was also examined in 20 DTI datasets acquired in subjects with vascular cognitive impairment (VCI) from the multi-site Ontario Neurodegenerative Disease Research Initiative (ONDRI). The mean fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were calculated in global brain grey matter (GM) and white matter (WM) regions. For the simulated DTI datasets, the measure used to evaluate the performance of the pipelines was the normalized difference between the mean DTI metrics measured in GM and WM regions and the corresponding ground truth DTI value. The performance of the proposed pipelines was very similar, particularly in FA measurements. However, the pipeline based on the RESTORE algorithm was the most accurate when analyzing the artifact-containing DTI datasets. The pipeline that combined the DTIPrep protocol and the RESTORE algorithm produced the lowest standard deviation in FA measurements in normal-appearing WM across subjects. We concluded that this pipeline was the most robust and is preferred for automated analysis of multisite brain DTI data.
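The evaluation measure described above (the normalized difference between a measured mean DTI metric and its ground-truth value) can be sketched as follows. This is a hypothetical illustration of the general idea, not the study's code; the function name and exact normalization are assumptions.

```python
def normalized_difference(measured_mean, ground_truth):
    """Normalized difference between a mean DTI metric (e.g., FA averaged
    over a GM or WM region) and the corresponding ground-truth value.
    Hypothetical sketch; the study's exact formula may differ."""
    return (measured_mean - ground_truth) / ground_truth

# Example: a pipeline recovers a mean WM FA of 0.48 against a simulated
# ground-truth FA of 0.50, i.e., a 4% underestimate.
print(normalized_difference(0.48, 0.50))  # -0.04
```

Values near zero indicate that a pipeline's artifact removal preserved the underlying diffusion metric; comparing this quantity across pipelines and artifact levels is what allows their accuracy to be ranked.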
A particularly prominent model of auditory cortical function proposes that a dorsal brain pathway, emanating from the posterior auditory cortex, is primarily concerned with processing the spatial features of sounds. In the present paper, we outline some difficulties with a strict functional interpretation of this pathway, and highlight the recent trend to understand this pathway in terms of one that uses acoustic information to guide motor output towards objects of interest. In this spirit, we consider the possibility that some of the auditory spatial processing activity that has been observed in the dorsal pathway may actually be understood as a form of action processing in which the visual system may be guided to a particular location of interest. In this regard, attentional orientation may be considered a low-level form of action planning. Incorporating an auditory-guided motor aspect to the dorsal pathway not only offers a more holistic account of auditory processing, but also provides a more ecologically valid perspective on auditory processing in dorsal brain regions.
"What" and "Where" in the Human Auditory System Alain, Claude; Arnott, Stephen R.; Hevenor, Stephanie ...
Proceedings of the National Academy of Sciences - PNAS,
10/2001, Letnik:
98, Številka:
21
Journal Article
Recenzirano
Odprti dostop
The extent to which sound identification and sound localization depend on specialized auditory pathways was examined by using functional magnetic resonance imaging and event-related brain potentials. Participants performed an S1-S2 match-to-sample task in which S1 differed from S2 in its pitch and/or location. In the pitch task, participants indicated whether S2 was lower, identical, or higher in pitch than S1. In the location task, participants were asked to localize S2 relative to S1 (i.e., leftward, same, or rightward). Relative to location, pitch processing generated greater activation in auditory cortex and the inferior frontal gyrus. Conversely, identifying the location of S2 relative to S1 generated greater activation in posterior temporal cortex, parietal cortex, and the superior frontal sulcus. Differential task-related effects on event-related brain potentials (ERPs) were seen in anterior and posterior brain regions beginning at 300 ms poststimulus and lasting for several hundred milliseconds. The converging evidence from two independent measurements of dissociable brain activity during identification and localization of identical stimuli provides strong support for specialized auditory streams in the human brain. These findings are analogous to the "what" and "where" segregation of visual information processing, and suggest that a similar functional organization exists for processing information from the auditory modality.
Subtle changes in hippocampal volumes may occur during both physiological and pathophysiological processes in the human brain. Assessing hippocampal volumes manually is a time-consuming procedure, however, creating a need for automated segmentation methods that are both fast and reliable over time. Segmentation algorithms that employ deep convolutional neural networks (CNN) have emerged as a promising solution for large longitudinal neuroimaging studies. However, for these novel algorithms to be useful in clinical studies, the accuracy and reproducibility should be established on independent datasets.
Here, we evaluate the performance of a CNN-based hippocampal segmentation algorithm developed by Thyreau and colleagues (Hippodeep). We compared its segmentation outputs to manual segmentation and FreeSurfer 6.0 in a sample of 200 healthy participants scanned repeatedly at seven sites across Canada, as part of the Canadian Biomarker Integration Network in Depression consortium. The algorithm demonstrated high levels of stability and reproducibility of volumetric measures across all time points compared to the other two techniques. Although more rigorous testing in clinical populations is necessary, this approach holds promise as a viable option for tracking volumetric changes in longitudinal neuroimaging studies.
•Hippodeep demonstrated high stability of measures across all time points.
•Hippodeep had better agreement with manual segmentations than FreeSurfer.
•The deep neural network performed better on problematic scans than FreeSurfer.
Speech-in-noise (SIN) comprehension deficits in older adults have been linked to changes in both subcortical and cortical auditory evoked responses. However, older adults' difficulty understanding SIN may also be related to an imbalance in signal transmission (i.e., functional connectivity) between brainstem and auditory cortices. By modeling high-density scalp recordings of speech-evoked responses with sources in brainstem (BS) and bilateral primary auditory cortices (PAC), we show that beyond attenuating neural activity, hearing loss in older adults compromises the transmission of speech information between subcortical and early cortical hubs of the speech network. We found that the strength of afferent BS→PAC neural signaling (but not the reverse efferent flow; PAC→BS) varied with mild declines in hearing acuity, and this “bottom-up” functional connectivity robustly predicted older adults’ performance in a SIN identification task. Connectivity was also a better predictor of SIN processing than unitary subcortical or cortical responses alone. Our neuroimaging findings suggest that in older adults (i) mild hearing loss differentially reduces neural output at several stages of auditory processing (PAC > BS), (ii) subcortical-cortical connectivity is more sensitive to peripheral hearing loss than top-down (cortical-subcortical) control, and (iii) reduced functional connectivity in afferent auditory pathways plays a significant role in SIN comprehension problems.
•Measured source brainstem and cortical speech-evoked potentials in older adults.
•Hearing loss alters functional connectivity from brainstem to auditory cortex.
•Afferent (not efferent) BS→PAC signaling predicts speech-in-noise perception.
•Subcortical-cortical connectivity is more sensitive to hearing insult than top-down signaling.
Speech comprehension difficulties are ubiquitous to aging and hearing loss, particularly in noisy environments. Older adults’ poorer speech-in-noise (SIN) comprehension has been related to abnormal neural representations within various nodes (regions) of the speech network, but how senescent changes in hearing alter the transmission of brain signals remains unspecified. We measured electroencephalograms in older adults with and without mild hearing loss during a SIN identification task. Using functional connectivity and graph-theoretic analyses, we show that hearing-impaired (HI) listeners have more extended (less integrated) communication pathways and less efficient information exchange among widespread brain regions (larger network eccentricity) than their normal-hearing (NH) peers. Parameter-optimized support vector machine classifiers applied to EEG connectivity data showed hearing status could be decoded (> 85% accuracy) solely using network-level descriptions of brain activity, but classification was particularly robust using left hemisphere connections. Notably, we found a reversal in directed neural signaling in the left hemisphere dependent on hearing status among specific connections within the dorsal–ventral speech pathways. NH listeners showed an overall net “bottom-up” signaling directed from auditory cortex (A1) to inferior frontal gyrus (IFG; Broca’s area), whereas the HI group showed the reverse signal (i.e., “top-down” Broca’s → A1). A similar flow reversal was noted between left IFG and motor cortex. Our full-brain connectivity results demonstrate that even mild forms of hearing loss alter how the brain routes information within the auditory–linguistic–motor loop.
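The graph-theoretic measure invoked above, network eccentricity, is the longest shortest-path distance from a node to any other node; larger eccentricities indicate a less integrated network. The following is a minimal, self-contained sketch of that measure on a toy graph (the adjacency structure and names are illustrative, not the study's EEG network):

```python
from collections import deque

def eccentricity(adj, node):
    """Eccentricity of `node`: the maximum shortest-path distance from it
    to any reachable node, computed by breadth-first search.
    `adj` maps each node to a list of its neighbours (toy unweighted graph)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# Toy 4-node chain A-B-C-D: the end node A has a larger eccentricity (3)
# than the more central node B (2), i.e., A sits in a less integrated position.
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(eccentricity(chain, "A"), eccentricity(chain, "B"))  # 3 2
```

In the study's framing, HI listeners' functional networks resemble the chain's periphery (long communication paths), while NH listeners' networks are more tightly interconnected.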
Quality assurance (QA) is crucial in longitudinal and/or multi-site studies, which involve the collection of data from a group of subjects over time and/or at different locations. It is important to regularly monitor the performance of the scanners over time and at different locations to detect and control for intrinsic differences (e.g., due to manufacturers) and changes in scanner performance (e.g., due to gradual component aging, software and/or hardware upgrades, etc.). As part of the Ontario Neurodegenerative Disease Research Initiative (ONDRI) and the Canadian Biomarker Integration Network in Depression (CAN-BIND), QA phantom scans were conducted approximately monthly for three to four years at 13 sites across Canada with 3T research MRI scanners. QA parameters were calculated for each scan using the functional Biomarker Imaging Research Network's (fBIRN) QA phantom and pipeline to capture between- and within-scanner variability. We also describe a QA protocol to measure the full-width-at-half-maximum (FWHM) of slice-wise point spread functions (PSF), used in conjunction with the fBIRN QA parameters. Variations in image resolution measured by the FWHM are a primary source of variance over time for many sites, as well as between sites and between manufacturers. We also identify an unexpected range of instabilities affecting individual slices in a number of scanners, which may amount to a substantial contribution of unexplained signal variance to their data. Finally, we identify a preliminary preprocessing approach to reduce this variance and/or alleviate the slice anomalies, and in a small human data set show that this change in preprocessing can have a significant impact on seed-based connectivity measurements for some individual subjects. We expect that other fMRI centres will find this approach to identifying and controlling scanner instabilities useful in similar studies.
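The FWHM measure used above can be estimated from a sampled point spread function by locating the two half-maximum crossings and interpolating between samples. This is a simplified sketch of that generic computation, not the ONDRI/CAN-BIND pipeline itself; the function name and the Gaussian test profile are assumptions:

```python
import numpy as np

def fwhm(x, y):
    """Full-width-at-half-maximum of a sampled point spread function.
    Finds the first and last samples at or above half the peak, then
    linearly interpolates each crossing. Assumes a single-peaked profile
    whose tails fall below half-maximum within the sampled range."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate the left crossing (between samples i-1 and i)
    left = x[i - 1] + (half - y[i - 1]) * (x[i] - x[i - 1]) / (y[i] - y[i - 1])
    # Interpolate the right crossing (between samples j and j+1)
    right = x[j] + (half - y[j]) * (x[j + 1] - x[j]) / (y[j + 1] - y[j])
    return right - left

# Sanity check on a Gaussian PSF, where FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma
x = np.linspace(-10, 10, 2001)
sigma = 1.5
y = np.exp(-x**2 / (2 * sigma**2))
print(round(fwhm(x, y), 3))  # close to 2.355 * 1.5 ≈ 3.532
```

Tracking this scalar per slice over monthly phantom scans is what makes drifts in effective image resolution visible across time, sites, and manufacturers.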
Evidence from anatomical and neurophysiological studies in nonhuman primates suggests a dual-pathway model of auditory processing wherein sound identity and sound location information are segregated along ventral and dorsal streams, respectively. The present meta-analysis reviewed evidence from auditory functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies to determine the reliability of this model in humans. Activation coordinates from 11 “spatial” studies (i.e., listeners made localization judgements on sounds that could occur at two or more perceptually different positions) and 27 “nonspatial” studies (i.e., listeners completed nonspatial tasks involving sounds presented from the same location) were entered into the analysis. All but one of the spatial studies reported activation within the inferior parietal lobule as opposed to only 41% of the nonspatial studies. In addition, 55% of spatial studies reported activity around the superior frontal sulcus as opposed to only 7% of the nonspatial studies. In comparison, inferior frontal activity (Brodmann's areas 45 and 47) was reported in only 9% of the spatial studies, but in 56% of the nonspatial studies. Finally, almost all temporal lobe activity observed during spatial tasks was confined to posterior areas, whereas nonspatial activity was distributed throughout the temporal lobe. These results support an auditory dual-pathway model in humans in which nonspatial sound information (e.g., sound identity) is processed primarily along the ventral stream whereas sound location is processed along the dorsal stream and areas posterior to primary auditory cortex.