Many sources of fluctuation contribute to the fMRI signal, which makes it difficult to identify the effects that are truly related to the underlying neuronal activity. Independent component analysis (ICA) – one of the most widely used techniques for the exploratory analysis of fMRI data – has been shown to be a powerful technique for identifying various sources of neuronally related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise-detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets.
The noise components can then be subtracted from (or regressed out of) the original data, to provide automated cleanup. On conventional resting-state fMRI (rfMRI) single-run datasets, FIX achieves about 95% overall accuracy. On high-quality rfMRI data from the Human Connectome Project, FIX achieves over 99% classification accuracy, and as a result is being used in the default rfMRI processing pipeline for generating HCP connectomes. FIX is publicly available as a plugin for FSL.
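The cleanup step described above (regressing the noise components out of the data) can be sketched in a few lines. This is an illustrative toy, not FIX's implementation: the function name, array shapes, and the joint least-squares fit are assumptions, chosen so that shared variance between signal and noise components is not removed twice.

```python
import numpy as np

def regress_out_noise(data, mixing, noise_idx):
    """data: (T, V) voxel time series; mixing: (T, K) IC time series;
    noise_idx: indices of components labelled as noise."""
    # Fit all component time series jointly, then subtract only the
    # noise components' fitted contribution.
    betas, *_ = np.linalg.lstsq(mixing, data, rcond=None)  # (K, V)
    return data - mixing[:, noise_idx] @ betas[noise_idx]

# Toy example: 100 time points, 50 voxels, 5 components (last two "noise")
rng = np.random.default_rng(0)
T, V, K = 100, 50, 5
mixing = rng.standard_normal((T, K))
data = mixing @ rng.standard_normal((K, V)) + 0.1 * rng.standard_normal((T, V))
cleaned = regress_out_noise(data, mixing, noise_idx=[3, 4])
print(cleaned.shape)  # (100, 50)
```

With an empty `noise_idx` the data are returned unchanged, which is a useful sanity check when wiring up a cleanup step like this.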
•We propose a simulation-based assessment of variance and bias of normative models.
•We investigate the impact of sample size, ground truth, fitting model and percentile.
•Precise estimation of outlying percentiles requires large samples (e.g. N ≫ 1000).
•Uncertainty rises greatly at the ends of the age range, where fewer data points exist.
•We provide an open tool that can be used for the equivalent of power calculations.
Modelling population reference curves, or normative modelling, is increasingly used with the advent of large neuroimaging studies. In this paper we assess the performance of fitting methods from the perspective of clinical applications and investigate the influence of sample size. Further, we evaluate linear and non-linear models for percentile curve estimation and highlight how the bias-variance trade-off manifests in typical neuroimaging data.
We created plausible ground truth distributions of hippocampal volumes in the age range of 45 to 80 years, as an example application. Based on these distributions we repeatedly simulated samples for sizes between 50 and 50,000 data points, and for each simulated sample we fitted a range of normative models. We compared the fitted models and their variability across repetitions to the ground truth, with specific focus on the outer percentiles (1st, 5th, 10th) as these are the most clinically relevant.
Our results quantify the expected decreasing trend in the variance of the volume estimates with increasing sample size. However, bias in the volume estimates decreases only modestly, with little further improvement at large sample sizes. The uncertainty of model performance is substantial for what would often be considered large samples in a neuroimaging context, and rises dramatically at the ends of the age range, where fewer data points exist. Flexible models perform better across sample sizes, especially for non-linear ground truth.
Surprisingly large samples of several thousand data points are needed to accurately capture outlying percentiles across the age range for applications in research and clinical settings. Performance evaluation methods should assess both bias and variance. Furthermore, caution is needed when approaching the ends of the age range captured by the source data set and, as is a well-known general principle, extrapolation beyond the age range should always be avoided. To help with such evaluations we have made our code available to guide researchers developing or utilising normative models.
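The simulation logic described above can be sketched as follows. The ground-truth distribution, age range, bin, and sample sizes here are illustrative assumptions (a Gaussian with linearly declining mean), not the models evaluated in the paper; the point is only to show how repeated sampling exposes the variance of an outlying-percentile estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_p5(age):
    # Assumed Gaussian ground truth: mean hippocampal volume declines
    # linearly with age, SD = 0.4; the 5th percentile sits 1.645 SD
    # below the mean.
    return (4.0 - 0.02 * (age - 45)) - 1.645 * 0.4

def simulate_once(n):
    age = rng.uniform(45, 80, n)
    vol = 4.0 - 0.02 * (age - 45) + rng.normal(0.0, 0.4, n)
    # Empirical 5th percentile in a narrow age bin centred on 75 years
    sel = (age > 72.5) & (age < 77.5)
    return np.percentile(vol[sel], 5)

results = {}
for n in (200, 2000, 20000):
    est = np.array([simulate_once(n) for _ in range(200)])
    err = est - true_p5(75.0)
    results[n] = (err.mean(), err.std())
    print(n, round(err.mean(), 3), round(err.std(), 3))
```

Across the 200 repetitions the spread of the error shrinks markedly with sample size, which is the variance trend the study quantifies; the residual mean error illustrates why bias must be tracked separately.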
We present a practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not. Manual identification of signal and noise after ICA decomposition is required for efficient data denoising: to train supervised algorithms, to check the results of unsupervised ones or to manually clean the data. In this paper we describe the main spatial and temporal features of ICs and provide general guidelines on how to evaluate these. Examples of signal and noise components are provided from a wide range of datasets (3T data, including examples from the UK Biobank and the Human Connectome Project, and 7T data), together with practical guidelines for their identification. Finally, we discuss how the data quality, data type and preprocessing can influence the characteristics of the ICs and present examples of particularly challenging datasets.
Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects.
We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs.
We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a “predominantly neurodegenerative” and a “predominantly vascular” cohort).
BIANCA was first optimised on a subset of images from each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlations between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks, and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts, and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test to evaluate the robustness of BIANCA, and compared its performance against existing methods.
Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies.
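The core idea of k-NN voxel classification with weighted spatial information can be illustrated with a toy sketch. This is not BIANCA's implementation: the feature layout (intensities followed by voxel coordinates), the `spatial_weight` parameter, and the data are all assumptions for illustration.

```python
import numpy as np

def knn_classify(train_X, train_y, test_X, k=3, spatial_weight=0.5, n_spatial=3):
    """k-NN with the last n_spatial columns (voxel coordinates) scaled by
    spatial_weight relative to the intensity features."""
    X_tr = train_X.astype(float).copy()
    X_te = test_X.astype(float).copy()
    X_tr[:, -n_spatial:] *= spatial_weight
    X_te[:, -n_spatial:] *= spatial_weight
    preds = []
    for x in X_te:
        d = np.linalg.norm(X_tr - x, axis=1)
        nn = np.argsort(d)[:k]
        # Fraction of lesion neighbours = pseudo-probability of WMH
        preds.append(train_y[nn].mean())
    return np.array(preds)

# Toy data: 2 intensity features (e.g. FLAIR, T1) + 3 voxel coordinates
train_X = np.array([[1.0, 0.2, 10, 10, 5], [1.1, 0.3, 11, 10, 5],
                    [0.3, 0.9, 40, 40, 5], [0.2, 0.8, 41, 39, 5]])
train_y = np.array([1, 1, 0, 0])  # 1 = WMH, 0 = normal-appearing tissue
test_X = np.array([[1.05, 0.25, 10, 11, 5]])
probs = knn_classify(train_X, train_y, test_X, k=3)
print(probs)
```

Raising `spatial_weight` makes neighbours in similar brain locations dominate; setting it to zero reduces the classifier to pure intensity matching, which is the trade-off the different weighting options expose.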
•BIANCA is a new tool for automated segmentation of white matter hyperintensities.
•BIANCA is multimodal, flexible, computationally lean, robust, freely available.
•We optimised and validated BIANCA on two different MRI protocols and populations.
•WMH volumes derived with BIANCA showed good correlations with visual ratings and age.
•BIANCA is promising for application in large cross-sectional cohort studies.
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat X-ray. The brain imaging component covers six modalities (T1, T2 FLAIR, susceptibility-weighted MRI, resting fMRI, task fMRI and diffusion MRI). Raw and processed data from the first 10,000 imaged subjects have recently been released for general research access. To help convert these data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline.
White matter hyperintensities (WMH) are frequently divided into periventricular (PWMH) and deep (DWMH), and the two classes have been associated with different cognitive, microstructural, and clinical correlates. However, although this distinction is widely used in visual rating scales, how to best anatomically define the two classes is still disputed. In fact, the methods used to define PWMH and DWMH vary significantly between studies, making results difficult to compare. The purpose of this study was twofold: first, to compare four current criteria used to define PWMH and DWMH in a cohort of healthy older adults (mean age: 69.58 ± 5.33 years) by quantifying possible differences in terms of estimated volumes; second, to explore associations between the two WMH sub-classes with cognition, tissue microstructure and cardiovascular risk factors, analysing the impact of different criteria on the specific associations. Our results suggest that the classification criterion used for the definition of PWMH and DWMH should not be considered a major obstacle for the comparison of different studies. We observed that higher PWMH load is associated with reduced cognitive function, higher mean arterial pressure and age. Higher DWMH load is associated with higher body mass index. PWMH have lower fractional anisotropy than DWMH, which also have more heterogeneous microstructure. These findings support the hypothesis that PWMH and DWMH are different entities and that their distinction can provide useful information about healthy and pathological aging processes.
•Classification criteria for periventricular/deep white matter hyperintensities are compared.
•The definition of PWMH and DWMH is not a major obstacle for study comparison.
•PWMH and DWMH have different functional, microstructural and clinical correlates.
•A 10 mm distance rule gave the best separation in terms of associations with the tested factors.
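A distance-rule criterion of the kind compared above can be sketched directly: WMH voxels within a fixed distance of the ventricle mask are labelled periventricular, the rest deep. The function name, brute-force distance computation, and toy 2D masks are assumptions for illustration; real pipelines would use a distance transform on 3D images.

```python
import numpy as np

def split_wmh(wmh_mask, vent_mask, voxdim_mm, thresh_mm=10.0):
    """Label each WMH voxel periventricular (within thresh_mm of the
    ventricles) or deep, using brute-force Euclidean distance."""
    vent_mm = np.argwhere(vent_mask) * voxdim_mm
    pv = np.zeros(wmh_mask.shape, bool)
    deep = np.zeros(wmh_mask.shape, bool)
    for v in np.argwhere(wmh_mask):
        d = np.linalg.norm(vent_mm - v * voxdim_mm, axis=1).min()
        (pv if d <= thresh_mm else deep)[tuple(v)] = True
    return pv, deep

# Toy 1x30 "image", 1 mm voxels: ventricle at column 0, WMH at 5 and 25
vent = np.zeros((1, 30), bool); vent[0, 0] = True
wmh = np.zeros((1, 30), bool); wmh[0, 5] = wmh[0, 25] = True
pv, deep = split_wmh(wmh, vent, voxdim_mm=1.0)
print(pv[0, 5], deep[0, 25])  # True True
```

Varying `thresh_mm` reproduces one axis along which the published criteria differ, which is why the comparison of criteria above focuses on the resulting volume estimates and associations.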
•Cardiovascular risk factors are associated with older brain age.
•Blood pressure is more strongly associated with white matter compared to gray matter.
•Resting state functional connectivity provides lower brain-age prediction accuracy.
•Brain-age prediction accuracy depends on sample size and age range.
Brain age is becoming a widely applied imaging-based biomarker of neural aging and a potential proxy for brain integrity and health. We estimated multimodal and modality-specific brain age in the Whitehall II (WHII) MRI cohort using machine learning and imaging-derived measures of gray matter (GM) morphology, white matter (WM) microstructure, and resting state functional connectivity (FC). The results showed that prediction accuracy improved when multiple imaging modalities were included in the model (R2 = 0.30, 95% CI 0.24, 0.36). The modality-specific GM and WM models showed similar performance (R2 = 0.22, 95% CI 0.16, 0.27 and R2 = 0.24, 95% CI 0.18, 0.30, respectively), while the FC model showed the lowest prediction accuracy (R2 = 0.002, 95% CI −0.005, 0.008), indicating that the FC features were less related to chronological age than the structural measures. Follow-up analyses showed that FC predictions were similarly low in a matched sub-sample from UK Biobank, and although FC predictions were consistently lower than GM predictions, the accuracy improved with increasing sample size and age range. Cardiovascular risk factors, including high blood pressure, alcohol intake, and stroke risk score, were each associated with brain aging in the WHII cohort. Blood pressure showed a stronger association with white matter than with gray matter, while no differences in the associations of alcohol intake and stroke risk with these modalities were observed. In conclusion, machine-learning-based brain age prediction can reduce the dimensionality of neuroimaging data to provide meaningful biomarkers of individual brain aging. However, model performance depends on study-specific characteristics, including sample size and age range, which may cause discrepancies in findings across studies.
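The modality-specific prediction scheme can be illustrated in miniature: regress chronological age on imaging-derived features and evaluate out-of-sample R2. The paper does not specify this exact model; ridge regression, the synthetic data, and the split-half evaluation below are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 50
age = rng.uniform(45, 80, n)
# Synthetic features weakly coupled to age plus noise (toy stand-in for
# GM/WM imaging-derived phenotypes)
W = rng.standard_normal(p)
X = np.outer(age - age.mean(), W) * 0.05 + rng.standard_normal((n, p))

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge regression on centred data."""
    Xc, yc = X - X.mean(0), y - y.mean()
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
    return beta, X.mean(0), y.mean()

# Split-half evaluation: train on the first half, test on the second
beta, xm, ym = ridge_fit(X[:250], age[:250])
pred = (X[250:] - xm) @ beta + ym
ss_res = np.sum((age[250:] - pred) ** 2)
ss_tot = np.sum((age[250:] - age[250:].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))  # out-of-sample R^2
```

Rerunning the sketch with fewer subjects or a narrower age range degrades R2, mirroring the sample-size and age-range dependence reported above.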
•We introduce a new method for inferring changes in parameters of degenerate models.
•Using this method, we can detect changes in parameters of the standard diffusion model with a conventional diffusion acquisition.
•We showed that extra-axonal signal is increased in white matter hyperintensities.
Biophysical models that attempt to infer real-world quantities from data usually have many free parameters. This over-parameterisation can result in degeneracies in model inversion and render parameter estimation ill-posed. However, in many applications, we are not interested in quantifying the parameters per se, but rather in identifying changes in parameters between experimental conditions (e.g. patients vs controls). Here we present a Bayesian framework to make inference on changes in the parameters of biophysical models even when model inversion is degenerate, which we refer to as Bayesian EstimatioN of CHange (BENCH).
We infer the parameter changes in two steps. First, using simulations, we train models that can estimate the pattern of change in the measurements given any hypothetical direction of change in the parameters. Next, for any pair of real data sets, we use these pre-trained models to estimate the probability that an observed difference in the data can be explained by each model of change.
BENCH is applicable to any type of data and models, and is particularly useful for biophysical models with parameter degeneracies, where we can assume the change is sparse. In this paper, we apply the approach in the context of microstructural modelling of diffusion MRI data, where the models are usually over-parameterised and not invertible without injecting strong assumptions. Using simulations, we show that, in the context of the standard model of white matter, our approach is able to identify changes in microstructural parameters from conventional multi-shell diffusion MRI data. We also apply our approach to a subset of subjects from the UK Biobank imaging study to identify the dominant standard-model parameter change in areas of white matter hyperintensities, under the assumption that the standard model holds in those areas.
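The two-step logic can be caricatured without any diffusion model at all. The toy forward model, the finite-difference "patterns of change", and the cosine-similarity scoring below are all assumptions standing in for BENCH's simulation-trained models and probabilistic scoring; the point is only that a degenerate model can still discriminate *which* parameter changed.

```python
import numpy as np

def forward(params):
    # Hypothetical 2-parameter forward model producing 4 "measurements"
    a, b = params
    return np.array([a + b, a - b, a * b, a + 2 * b])

base = np.array([1.0, 0.5])
eps = 1e-4

# Step 1: pattern of measurement change per unit change in each parameter
patterns = []
for i in range(2):
    d = np.zeros(2); d[i] = eps
    patterns.append((forward(base + d) - forward(base)) / eps)

# Step 2: an observed group difference, simulated here as a change in
# parameter 0 only, plus a little noise
observed = patterns[0] * 0.1 + 1e-3 * np.random.default_rng(3).standard_normal(4)

def score(pattern, obs):
    # Cosine similarity as a crude stand-in for a model-of-change likelihood
    return abs(pattern @ obs) / (np.linalg.norm(pattern) * np.linalg.norm(obs))

scores = [score(p, observed) for p in patterns]
print([round(s, 3) for s in scores])
```

The change along parameter 0 scores far higher than the alternative, even though jointly inverting `forward` for both parameters from noisy data could be ill-posed: this is the sense in which inference on the *direction of change* sidesteps degeneracy.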
Parkinson's disease psychosis (PDP) describes a spectrum of symptoms that may arise in Parkinson's disease (PD), including visual hallucinations (VH). Imaging studies investigating the neural correlates of PDP have been inconsistent in their findings, due to differences in study design and limitations of scale. Here we use empirical Bayes harmonisation to pool structural imaging data from multiple research groups into a large-scale mega-analysis, allowing us to identify cortical regions and networks involved in VH and their relation to receptor binding. Differences in the morphometrics analysed show a wider cortical involvement underlying VH than previously recognised, including primary visual cortex and surrounding regions, and the hippocampus, independent of its role in cognitive decline. Structural covariance analyses point to the involvement of the attentional control networks in PD-VH, while associations with receptor density maps suggest that neurotransmitter loss may be linked to the cortical changes.