•A method for segmenting white matter lesions and dozens of brain structures in MS.
•The method is adaptive to different scanners and MRI sequences.
•It can be used to quantify brain volumes without resorting to lesion-filling.
•The method is publicly available as part of FreeSurfer.
Here we present a method for the simultaneous segmentation of white matter lesions and normal-appearing neuroanatomical structures from multi-contrast brain MRI scans of multiple sclerosis patients. The method integrates a novel model for white matter lesions into a previously validated generative model for whole-brain segmentation. By using separate models for the shape of anatomical structures and their appearance in MRI, the algorithm can adapt to data acquired with different scanners and imaging protocols without retraining. We validate the method using four disparate datasets, showing robust performance in white matter lesion segmentation while simultaneously segmenting dozens of other brain structures. We further demonstrate that the contrast-adaptive method can be safely applied to MRI scans of healthy controls, and we replicate previously documented atrophy patterns in deep gray matter structures in MS. The algorithm is publicly available as part of the open-source neuroimaging package FreeSurfer.
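To make the contrast-adaptive idea concrete, below is a minimal sketch of this style of generative segmentation: a probabilistic atlas supplies per-voxel class priors, while Gaussian intensity parameters for each class are re-estimated from the scan at hand with expectation maximization, so no fixed intensity model is ever trained. The function names, toy data, and single-channel Gaussian likelihood are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def segment_em(intensities, atlas_priors, n_iter=20, eps=1e-8):
    """EM segmentation with a probabilistic atlas prior (illustrative sketch).

    intensities : (V,) voxel intensities of the scan to segment
    atlas_priors: (V, K) per-voxel prior probabilities for K classes
    """
    resp = atlas_priors.copy()  # initial soft assignments from the atlas
    for _ in range(n_iter):
        # M-step: re-estimate class-wise Gaussian parameters from this scan.
        nk = resp.sum(axis=0) + eps
        mu = (resp * intensities[:, None]).sum(axis=0) / nk
        var = (resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / nk + eps
        # E-step: posterior = atlas prior x Gaussian likelihood, normalized.
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) \
              / np.sqrt(2 * np.pi * var)
        post = atlas_priors * lik
        resp = post / (post.sum(axis=1, keepdims=True) + eps)
    return resp.argmax(axis=1)

# Toy example: two tissue classes with different means under a soft atlas prior.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=1000)
img = np.where(truth == 0, rng.normal(30, 5, 1000), rng.normal(80, 5, 1000))
priors = np.column_stack([0.8 - 0.6 * truth, 0.2 + 0.6 * truth])  # imperfect prior
seg = segment_em(img, priors)
print(f"toy accuracy: {(seg == truth).mean():.2f}")
```

Because the intensity parameters are fitted to each scan, the same atlas priors can segment data of different contrasts without retraining, which is the adaptivity the abstract refers to.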
The hippocampal formation is a complex, heterogeneous structure that consists of a number of distinct, interacting subregions. Atrophy of these subregions is implicated in a variety of neurodegenerative diseases, most prominently in Alzheimer's disease (AD). Thanks to the increasing resolution of MRI and the availability of computational atlases, automatic segmentation of the hippocampal subregions is becoming feasible. Here we introduce a generative model for dedicated longitudinal segmentation that relies on subject-specific atlases. The segmentations of the scans at the different time points are jointly computed using Bayesian inference. All time points are treated identically to avoid processing bias. We evaluate this approach using over 4700 scans from two publicly available datasets (ADNI and MIRIAD). In test–retest reliability experiments, the proposed method yielded significantly lower volume differences and significantly higher Dice overlaps than the cross-sectional approach for nearly every subregion (averages across subregions: volume difference 4.5% vs. 6.5%; Dice overlap 81.8% vs. 75.4%). The longitudinal algorithm also demonstrated increased sensitivity to group differences: in MIRIAD (69 subjects: 46 with AD and 23 controls), it found differences in atrophy rates between AD and controls that the cross-sectional method could not detect in a number of subregions: right parasubiculum, left and right presubiculum, right subiculum, left dentate gyrus, left CA4, left HATA and right hippocampal tail. In ADNI (836 subjects: 369 with AD, 215 with early mild cognitive impairment (eMCI) and 252 controls), all methods found significant differences between AD and controls, but the proposed longitudinal algorithm detected differences between controls and eMCI and differences between eMCI and AD that the cross-sectional method could not find: left presubiculum, right subiculum, left and right parasubiculum, left and right HATA. Moreover, many of the differences that the cross-sectional method already found were detected with higher significance. The presented algorithm will be made available as part of the open-source neuroimaging package FreeSurfer.
•A segmentation method for the hippocampal substructures in longitudinal MRI scans
•Increased test–retest reliability compared with cross-sectional analysis
•Increased power to detect group differences in atrophy rates in an LME framework (see the sketch below)
•Algorithm will be made publicly available as part of FreeSurfer
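As a pointer for readers unfamiliar with the linear mixed effects (LME) framework mentioned above, here is a minimal sketch of how atrophy-rate differences between groups are typically tested on longitudinal volumes: a random intercept per subject, with the group-by-time interaction carrying the rate difference. The synthetic data, group sizes, and effect sizes are ours, purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal volumes: each subject is scanned at several time
# points, with the AD group atrophying faster than controls.
rng = np.random.default_rng(1)
rows = []
for sid in range(60):
    group = "AD" if sid < 30 else "CN"
    base = rng.normal(1000, 50)                  # baseline volume, mm^3
    rate = -20 if group == "AD" else -5          # atrophy rate, mm^3/year
    for t in np.arange(0, 3, 0.5):
        rows.append(dict(subject=sid, group=group, years=t,
                         volume=base + rate * t + rng.normal(0, 10)))
df = pd.DataFrame(rows)

# Random intercept per subject; the years:group interaction tests whether
# atrophy rates differ between the groups.
model = smf.mixedlm("volume ~ years * group", df, groups="subject")
fit = model.fit()
print(fit.summary())
```

The coefficient on the years:group interaction estimates the difference in atrophy rates between groups, and its p-value is the group-difference test underlying results like those reported above.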
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on the serial histology of 12 whole thalami from six autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) made on in vivo brain MR data from 39 subjects. The 3D structure of the histological data and corresponding manual segmentations was recovered using the ex vivo MRI as the reference frame, with stacks of blockface photographs acquired during the sectioning serving as an intermediate target. The atlas, which was encoded as an adaptive tetrahedral mesh, shows good agreement with previous histological studies of the thalamus in terms of the volumes of representative nuclei. When applied to the segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and the ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly available as part of the neuroimaging package FreeSurfer.
•A probabilistic atlas of the human thalamus (26 nuclei) derived from histology.
•3D histology reconstruction assisted by ex vivo MRI and blockface photographs.
•Companion Bayesian method segments the nuclei from in vivo scans of any MRI contrast.
•Method has excellent test-retest reliability and detects differential effects in AD.
•The method is publicly available as part of FreeSurfer.
With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually labeled training datasets of brain images required to train such supervised methods are frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MR imaging parameters (scanners, field strengths, receive coils, etc.) that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image contrast invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (≈ 45 s), and consistency across a wide range of acquisition protocols.
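As an illustration of the augmentation strategy described above, the sketch below uses the steady-state signal equation of a spoiled gradient echo (FLASH) sequence as an approximate forward model: tissue parameter maps (proton density, T1, T2*) are pushed through the equation with randomly drawn sequence parameters (TR, TE, flip angle) to synthesize training images with new contrasts. The choice of sequence, the parameter ranges, and the toy maps are our assumptions for illustration; the paper builds forward models for the pulse sequences of interest.

```python
import numpy as np

def flash_signal(pd_map, t1_map, t2s_map, tr, te, flip_deg):
    """Approximate spoiled gradient echo (FLASH) forward model: maps tissue
    parameters (proton density, T1, T2*) and sequence parameters (TR, TE,
    flip angle) to image intensity via the steady-state signal equation."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1_map)
    return (pd_map * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)
            * np.exp(-te / t2s_map))

def augment(pd_map, t1_map, t2s_map, rng):
    """Draw random sequence parameters to synthesize a training image with a
    plausible, previously unseen contrast (ranges are illustrative)."""
    tr = rng.uniform(0.01, 3.0)        # repetition time, seconds
    te = rng.uniform(0.002, 0.1)       # echo time, seconds
    flip = rng.uniform(5, 90)          # flip angle, degrees
    img = flash_signal(pd_map, t1_map, t2s_map, tr, te, flip)
    return img / img.max()             # normalize before feeding the CNN

# Toy tissue maps (in practice derived from the labeled training subjects).
rng = np.random.default_rng(0)
pd_map = rng.uniform(0.7, 1.0, (64, 64))
t1_map = rng.uniform(0.5, 4.0, (64, 64))     # seconds
t2s_map = rng.uniform(0.01, 0.1, (64, 64))   # seconds
synthetic = augment(pd_map, t1_map, t2s_map, rng)
```

Each training iteration can draw fresh sequence parameters, so the network never sees the same contrast twice and is pushed toward contrast invariance.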
Transcranial brain stimulation (TBS) has been established as a method for modulating and mapping the function of the human brain, and as a potential treatment tool in several brain disorders. Typically, the stimulation is applied using a one-size-fits-all approach with predetermined locations for the electrodes in transcranial electric stimulation (TES) or for the coil in transcranial magnetic stimulation (TMS), which disregards anatomical variability between individuals. However, the induced electric field distribution in the head depends largely on anatomical features, implying the need for individually tailored stimulation protocols for focal dosing. This requires detailed models of the individual head anatomy, combined with electric field simulations, to find an optimal stimulation protocol for a given cortical target. Considering the anatomical and functional complexity of different brain disorders and pathologies, it is crucial to account for this anatomical variability in order to translate TBS from a research tool into a viable treatment option.
In this article we present a new method, called CHARM, for the automated segmentation of fifteen different head tissues from magnetic resonance (MR) scans. The new method compares favorably to two freely available software tools on a five-tissue segmentation task, while obtaining reasonable segmentation accuracy over all fifteen tissues. The method automatically adapts to variability in the input scans and can thus be directly applied to clinical or research scans acquired with different scanners, sequences or settings. We show that an increase in automated segmentation accuracy results in a lower relative error in electric field simulations when compared to anatomical head models constructed from reference segmentations. However, the improved segmentations, and by implication the electric field simulations, are still affected by systematic artifacts in the input MR scans. As long as the artifacts are unaccounted for, this can lead to local simulation differences of up to 30% of the peak field strength relative to reference simulations. Finally, we demonstrate by example the effect of including all fifteen tissue classes in the field simulations, compared with the standard approach of using only five tissue classes, and show that for specific stimulation configurations the local differences can reach 10% of the peak field strength.
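For concreteness, here is a minimal sketch of how we read the difference metric quoted above: element-wise deviations between two electric field magnitude maps, normalized by the peak field strength of the reference simulation. The function name and the synthetic field values are our illustrative assumptions; this is not code from the presented pipeline.

```python
import numpy as np

def local_difference_percent(e_test, e_ref):
    """Element-wise difference between two electric field magnitude maps,
    expressed as a percentage of the peak field strength of the reference
    simulation (our reading of the metric quoted in the abstract)."""
    peak = np.max(e_ref)
    return 100.0 * np.abs(e_test - e_ref) / peak

# Example with two synthetic field magnitude maps (values in V/m).
rng = np.random.default_rng(0)
e_ref = rng.uniform(0.0, 1.0, 10000)              # reference head model
e_test = e_ref + rng.normal(0.0, 0.05, 10000)     # perturbed head model
diff = local_difference_percent(e_test, e_ref)
print(f"max local difference: {diff.max():.1f}% of peak field")
```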
•We introduce a new automated method for whole-head segmentation.
•The method segments 15 different head tissues, also covering the neck.
•The segmentation accuracy and robustness compare favorably to existing tools.
•Choice of scan parameters can cause segmentation and simulation errors of up to 30%.
•Including extra tissues in the simulation affects the electric field locally.
•An improved inference method for Bayesian segmentation using MCMC sampling.
•The sampling is used to approximate the integral over model parameters.
•We tested the method in an AD classification task using hippocampal subfield volumes.
•The method outperforms using point estimates of the parameters in the classification.
•The framework also provides informative error bars on the volume estimates.
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer’s disease classification task. As an additional benefit, the technique also allows one to compute informative “error bars” on the volume estimates of individual structures.
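The sketch below illustrates the core idea on a toy model: instead of segmenting with a single point estimate of an uncertain model parameter (here the mean intensity of one class), a Metropolis-Hastings chain samples the parameter from its posterior, and volume estimates are averaged over the samples, which also yields the "error bars" mentioned above. The one-parameter model, fixed variances, and flat prior are our simplifications; the paper marginalizes over the full parameter set of a hippocampal subfield segmentation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scan: intensities from two classes. Class 0 parameters are known;
# the class 1 mean mu1 is an uncertain model parameter. A point-estimate
# approach would fix mu1; here we marginalize over it with MCMC.
x = np.concatenate([rng.normal(20, 4, 500), rng.normal(60, 4, 500)])

def likelihoods(mu1):
    """Unnormalized Gaussian likelihoods of each voxel under both classes
    (equal sigmas, so the shared normalization constants cancel)."""
    l0 = np.exp(-0.5 * ((x - 20) / 4) ** 2)
    l1 = np.exp(-0.5 * ((x - mu1) / 4) ** 2)
    return l0, l1

def log_post(mu1):
    """Log posterior of mu1 under a flat prior, labels summed out per voxel."""
    l0, l1 = likelihoods(mu1)
    return np.sum(np.log(0.5 * l0 + 0.5 * l1))

# Metropolis-Hastings over the uncertain parameter mu1.
mu, lp, volumes = 50.0, log_post(50.0), []
for it in range(3000):
    prop = mu + rng.normal(0, 1.0)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    if it >= 500:                           # discard burn-in
        l0, l1 = likelihoods(mu)
        p1 = l1 / (l0 + l1)                 # P(label = 1 | x, mu1)
        volumes.append(p1.sum())            # expected class 1 volume (voxels)

volumes = np.array(volumes)
print(f"volume: {volumes.mean():.1f} +/- {volumes.std():.1f} voxels")
```

Averaging over the chain rather than plugging in a single parameter value gives both a marginalized volume estimate and a spread that quantifies its uncertainty.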
•A label fusion framework based on a generative model that works across modalities.
•The registrations are not precomputed, but estimated during the fusion.
•The registrations are explicitly linked in the generative model.
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols.
In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations.
We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
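Since the Demons framework may be unfamiliar, below is a minimal sketch of a classic intensity-based Demons registration step (Thirion-style forces with Gaussian smoothing of the update). Note this shows only the registration machinery: in the model above, the warp updates are driven by the generative model's label posteriors rather than raw intensity differences, and all atlas warps are coupled; none of that is reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=1.0, eps=1e-8):
    """One classic (Thirion) Demons update: forces from the intensity mismatch
    and the fixed-image gradient, with Gaussian smoothing of the update as a
    fluid-like regularizer. disp has shape (2, H, W) and maps fixed-space
    coordinates into moving-space via x -> x + disp(x)."""
    H, W = fixed.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]],
                             order=1, mode='nearest')
    diff = warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    disp[0] -= gaussian_filter(diff * gy / denom, sigma)
    disp[1] -= gaussian_filter(diff * gx / denom, sigma)
    return disp, warped

# Toy example: recover a 2-pixel horizontal shift between two smooth images.
rng = np.random.default_rng(0)
fixed = gaussian_filter(rng.normal(size=(64, 64)), 4)
moving = np.roll(fixed, 2, axis=1)
disp = np.zeros((2, 64, 64))
for _ in range(100):
    disp, warped = demons_step(fixed, moving, disp)
print(f"residual MAE: {np.abs(warped - fixed).mean():.4f}")
```

Roughly speaking, in the paper's joint model the force term comes from the current label posteriors rather than the raw intensity difference, which is what lets the atlases and the target differ in modality.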
Purpose:
In radiotherapy based only on magnetic resonance imaging (MRI), knowledge about tissue electron densities must be derived from the MRI. This can be achieved by converting the MRI scan to a so-called pseudo-computed tomography (pCT). An obstacle is that the voxel intensities in conventional MRI scans are not uniquely related to electron density. The authors previously demonstrated that a patch-based method could produce accurate pCTs of the brain using conventional T1-weighted MRI scans. The method was driven mainly by local patch similarities and relied on simple affine registrations between an atlas database of co-registered MRI/CT scan pairs and the MRI scan to be converted. In this study, the authors investigate the applicability of the patch-based approach in the pelvis. This region is challenging for a method based on local similarities due to the greater inter-patient variation. The authors benchmark the method against a baseline pCT strategy in which all voxels inside the body contour are assigned a water-equivalent bulk density. Furthermore, the authors implement a parallelized approximate patch search strategy to speed up the pCT generation time to a more clinically relevant level.
Methods:
The data consisted of CT and T1-weighted MRI scans of 10 prostate patients. pCTs were generated using an approximate patch search algorithm in a leave-one-out fashion and compared with the CT using frequently described metrics such as the voxel-wise mean absolute error (MAEvox) and the deviation in water-equivalent path lengths. Furthermore, the dosimetric accuracy was tested for a volumetric modulated arc therapy plan using dose–volume histogram (DVH) point deviations and γ-index analysis.
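As a rough illustration of the patch-based conversion, the sketch below predicts a pseudo-CT value per voxel by finding the most similar MRI patches in an atlas database of co-registered MRI/CT pairs and averaging the CT values at their centers, then reports MAEvox. An exact k-d tree search (scikit-learn) stands in for the paper's parallelized approximate search, and the 2D toy images and fake HU relation are purely illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def extract_patches(img, r=2):
    """Flattened (2r+1)x(2r+1) patches around every interior voxel."""
    H, W = img.shape
    patches, centers = [], []
    for i in range(r, H - r):
        for j in range(r, W - r):
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

# Toy 'atlas database': one co-registered MRI/CT pair with a fake HU relation.
rng = np.random.default_rng(0)
atlas_mri = rng.uniform(0, 1, (64, 64))
atlas_ct = 1000 * atlas_mri + rng.normal(0, 10, (64, 64))   # fake HU mapping

atlas_patches, atlas_centers = extract_patches(atlas_mri)
atlas_hu = np.array([atlas_ct[c] for c in atlas_centers])

# Exact k-NN patch search (stand-in for the parallelized approximate search).
knn = NearestNeighbors(n_neighbors=5).fit(atlas_patches)

# 'Convert' a new scan: average HU over the 5 most similar atlas patches.
target_mri = rng.uniform(0, 1, (64, 64))
target_patches, target_centers = extract_patches(target_mri)
_, idx = knn.kneighbors(target_patches)
pct = atlas_hu[idx].mean(axis=1)

true_hu = np.array([1000 * target_mri[c] for c in target_centers])
print(f"toy MAEvox: {np.mean(np.abs(pct - true_hu)):.1f} HU")
```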
Results:
The patch-based approach had an average MAEvox of 54 HU, median deviations of less than 0.4% in relevant DVH points, and a γ-index pass rate of 0.97 using a 1%/1 mm criterion. The patch-based approach performed significantly better than the baseline water pCT in almost all metrics. The approximate patch search strategy was 70 times faster than a brute-force search, with an average prediction time of 20.8 min.
Conclusions:
The authors showed that a patch-based method based on affine registrations and T1-weighted MRI could generate accurate pCTs of the pelvis. The main source of differences between pCT and CT was positional changes of air pockets and the body outline.
We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated with those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widely used neuroimaging suite FreeSurfer (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
Every year, thousands of human brains are donated to science. These brains are used to study normal aging, as well as neurological diseases like Alzheimer's or Parkinson's. Donated brains usually go to 'brain banks', institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes. Often, studies of dead brains rely on qualitative observations, such as 'the hippocampus displays some atrophy', rather than concrete 'numerical' measurements. This is because the gold standard for taking three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise, especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions.

To bridge this gap, Gazula et al. have developed openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data can be used to take precise quantitative measurements that can better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions).

The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI images. Gazula et al. show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures.

The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widely used neuroimaging software package available to any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, allowing them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.
Purpose:
In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo-computed tomography (pCT). This is a nontrivial task, since the voxel intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization, and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories.
Methods:
The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, nested cross-validation was used to find optimal model parameters for all the methods (see the sketch below). Voxel-wise and geometric evaluations of the pCTs were performed. Furthermore, a radiologic evaluation based on water-equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan.
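For readers unfamiliar with nested cross-validation, here is a minimal scikit-learn sketch of the general pattern: an inner loop selects model parameters while an outer leave-one-out loop estimates performance on held-out cases, so parameter tuning never sees the test subject. The stand-in k-NN regressor, the synthetic data, and the 20-sample size are our illustrative assumptions, not the paper's setup (which used five patients and pCT-specific models).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

# Synthetic data: one feature vector and one scalar outcome per subject.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))
y = X[:, 0] * 100 + rng.normal(0, 5, 20)

# Inner loop: grid search over model parameters on the training subjects only.
inner = GridSearchCV(KNeighborsRegressor(), {"n_neighbors": [1, 3, 5]}, cv=3)

# Outer loop: leave-one-out performance estimate on held-out subjects.
scores = cross_val_score(inner, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print(f"outer leave-one-out MAE: {-scores.mean():.1f}")
```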
Results:
The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage.
Conclusions:
We showed that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods, showing promising potential for RT of the brain based only on MRI.