Volumetric registration of brain MRI is routinely used in human neuroimaging, e.g., to align different MRI modalities, to measure change in longitudinal analysis, to map an individual to a template, or in registration-based segmentation. Classical registration techniques based on numerical optimization have been very successful in this domain, and are implemented in widespread software suites like ANTs, Elastix, NiftyReg, or DARTEL. Over the last 7-8 years, learning-based techniques have emerged, which have a number of advantages like high computational efficiency, potential for higher accuracy, easy integration of supervision, and the ability to be part of meta-architectures. However, their adoption in neuroimaging pipelines has so far been almost nonexistent. Reasons include: lack of robustness to changes in MRI modality and resolution; lack of robust affine registration modules; lack of (guaranteed) symmetry; and, at a more practical level, the requirement of deep learning expertise that may be lacking at neuroimaging research sites. Here, we present EasyReg, an open-source, learning-based registration tool that can be easily used from the command line without any deep learning expertise or specific hardware. EasyReg combines the features of classical registration tools, the capabilities of modern deep learning methods, and the robustness to changes in MRI modality and resolution provided by our recent work in domain randomization. As a result, EasyReg is fast, symmetric, diffeomorphic (and thus invertible), agnostic to MRI modality and resolution, and compatible with affine and nonlinear registration, and it does not require any preprocessing or parameter tuning. We present results on challenging registration tasks, showing that EasyReg is as accurate as classical methods when registering 1 mm isotropic scans within MRI modality, but much more accurate across modalities and resolutions.
EasyReg is publicly available as part of FreeSurfer; see https://surfer.nmr.mgh.harvard.edu/fswiki/EasyReg .
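The abstract notes that EasyReg is diffeomorphic and thus invertible. A standard way to obtain such warps in learning-based registration is to predict a stationary velocity field and integrate it by scaling and squaring. The 2D NumPy/SciPy sketch below illustrates only that integration step; the field shapes, step count, and implementation details are illustrative assumptions, not EasyReg's actual code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_svf(velocity, n_steps=6):
    """Integrate a stationary velocity field into a displacement field
    via scaling and squaring: start from v / 2**n and self-compose the
    resulting small warp n times, approximating exp(v).

    velocity: (2, H, W) array of 2D displacements in voxel units.
    Returns a displacement field of the same shape.
    """
    disp = velocity / (2 ** n_steps)                  # small initial warp
    grid = np.mgrid[0:velocity.shape[1], 0:velocity.shape[2]].astype(float)
    for _ in range(n_steps):
        coords = grid + disp                          # where each voxel maps
        # (phi o phi)(x) = x + d(x) + d(x + d(x)): resample d at the
        # warped locations and add it to the current displacement
        warped = np.stack([map_coordinates(disp[c], coords, order=1,
                                           mode='nearest')
                           for c in range(2)])
        disp = disp + warped
    return disp
```

Because the velocity field is stationary, integrating `-velocity` with the same routine yields (an approximation of) the inverse warp, which is what makes invertibility cheap in this family of methods.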
•We review the biomedical multi-atlas segmentation (MAS) literature.
•We present how MAS evolved, and now relates to alternative methods.
•We present our perspective on the future of MAS.
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing et al. (2004), Klein et al. (2005), and Heckemann et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of “atlases” (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003–2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
•We describe the process of generating histological sections.
•We present artefacts and image processing methods to minimise them.
•We survey methods for 3D histology reconstruction.
•We highlight hybrid approaches and discuss remaining challenges in the field.
Histology permits the observation of otherwise invisible structures of the internal topography of a specimen. Although it enables the investigation of tissues at a cellular level, it is invasive and breaks topology due to cutting. Three-dimensional (3D) reconstruction was thus introduced to overcome the limitations of single-section studies in a dimensional scope. 3D reconstruction finds its roots in embryology, where it enabled the visualisation of spatial relationships of developing systems and organs, and extended to biomedicine, where the observation of individual, stained sections provided only partial understanding of normal and abnormal tissues. However, despite bringing visual awareness, recovering realistic reconstructions is elusive without prior knowledge about the tissue shape.
3D medical imaging made such structural ground truths available. In addition, combining non-invasive imaging with histology unveiled invaluable opportunities to relate macroscopic information to the underlying microscopic properties of tissues through the establishment of spatial correspondences; image registration is one technique that permits the automation of such a process and we describe reconstruction methods that rely on it. It is thereby possible to recover the original topology of histology and lost relationships, gain insight into what affects the signals used to construct medical images (and characterise them), or build high resolution anatomical atlases.
This paper reviews almost three decades of methods for 3D histology reconstruction from serial sections, used in the study of many different types of tissue. We first summarise the process that produces digitised sections from a tissue specimen in order to understand the peculiarity of the data, the associated artefacts and some possible ways to minimise them. We then describe methods for 3D histology reconstruction with and without the help of 3D medical imaging, along with methods of validation and some applications. We finally attempt to identify the trends and challenges that the field is facing, many of which are derived from the cross-disciplinary nature of the problem as it involves the collaboration between physicists, histopathologists, computer scientists and physicians.
•A publicly available deep learning tool to segment the hypothalamus and its subunits.
•Our tool outperforms inter-rater accuracy and approaches intra-rater precision level.
•It can robustly generalise to unseen heterogeneous datasets.
•It yields a rejection rate of less than 1% in a QC analysis performed on 675 scans.
•It detects subtle subunit-specific hypothalamic atrophy in Alzheimer’s Disease.
Despite the crucial role of the hypothalamus in the regulation of the human body, neuroimaging studies of this structure and its nuclei are scarce. Such scarcity partially stems from the lack of automated segmentation tools, since manual delineation suffers from scalability and reproducibility issues. Due to the small size of the hypothalamus and the lack of image contrast in its vicinity, automated segmentation is difficult and has been long neglected by widespread neuroimaging packages like FreeSurfer or FSL. Nonetheless, recent advances in deep machine learning are enabling us to tackle difficult segmentation problems with high accuracy. In this paper we present a fully automated tool based on a deep convolutional neural network, for the segmentation of the whole hypothalamus and its subregions from T1-weighted MRI scans. We use aggressive data augmentation in order to make the model robust to T1-weighted MR scans from a wide array of different sources, without any need for preprocessing. We rigorously assess the performance of the presented tool through extensive analyses, including: inter- and intra-rater variability experiments between human observers; comparison of our tool with manual segmentation; comparison with an automated method based on multi-atlas segmentation; assessment of robustness by quality control analysis of a larger, heterogeneous dataset (ADNI); and indirect evaluation with a volumetric study performed on ADNI. The presented model outperforms multi-atlas segmentation scores as well as inter-rater accuracy level, and approaches intra-rater precision. Our method does not require any preprocessing and runs in less than a second on a GPU, and approximately 10 seconds on a CPU. The source code as well as the trained model are publicly available at https://github.com/BBillot/hypothalamus_seg, and will also be distributed with FreeSurfer.
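The abstract above attributes the tool's robustness to scans from many sources to aggressive data augmentation. The sketch below illustrates the general idea (random gamma, a smooth multiplicative bias field, and additive noise); all hyper-parameters are illustrative assumptions, and this is not the actual augmentation model of hypothalamus_seg.

```python
import numpy as np
from scipy.ndimage import zoom

def augment_intensity(img, rng=None):
    """Illustrative MRI intensity augmentation: random gamma, smooth
    multiplicative bias field, and Gaussian noise. (A simplified sketch
    of the kind of augmentation described in the abstract.)"""
    rng = np.random.default_rng() if rng is None else rng
    out = img.astype(float)
    out = (out - out.min()) / (np.ptp(out) + 1e-8)    # normalise to [0, 1]
    out = out ** np.exp(rng.normal(0, 0.25))          # random gamma (log space)
    # low-frequency bias field: upsample a tiny random grid to image size
    small = rng.normal(0, 0.3, size=(4,) * out.ndim)
    bias = np.exp(zoom(small, [s / 4 for s in out.shape], order=1))
    out = out * bias
    out = out + rng.normal(0, 0.02, size=out.shape)   # additive scanner noise
    return out
```

Applying such transforms on the fly during training exposes the network to a much wider range of appearances than the training scans themselves cover, which is what removes the need for preprocessing at test time.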
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on the serial histology of 12 whole thalami from six autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) made on in vivo brain MR data from 39 subjects. The 3D structure of the histological data and corresponding manual segmentations was recovered using the ex vivo MRI as reference frame, and stacks of blockface photographs acquired during the sectioning as intermediate target. The atlas, which was encoded as an adaptive tetrahedral mesh, shows a good agreement with previous histological studies of the thalamus in terms of volumes of representative nuclei. When applied to segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly available as part of the neuroimaging package FreeSurfer.
•A probabilistic atlas of the human thalamus (26 nuclei) derived from histology.
•3D histology reconstruction assisted by ex vivo MRI and blockface photographs.
•Companion Bayesian method segments the nuclei from in vivo scans of any MRI contrast.
•Method has excellent test-retest reliability and detects differential effects in AD.
•The method is publicly available as part of FreeSurfer.
The development of automated tools for brain morphometric analysis in infants has lagged significantly behind analogous tools for adults. This gap reflects the greater challenges in this domain due to: 1) a smaller-scaled region of interest, 2) increased motion corruption, 3) regional changes in geometry due to heterochronous growth, and 4) regional variations in contrast properties corresponding to ongoing myelination and other maturation processes. Nevertheless, there is a great need for automated image-processing tools to quantify differences between infant groups and other individuals, because aberrant cortical morphologic measurements (including volume, thickness, surface area, and curvature) have been associated with neuropsychiatric, neurologic, and developmental disorders in children. In this paper we present an automated segmentation and surface extraction pipeline designed to accommodate clinical MRI studies of infant brains in a population of 0-2 year-olds. The algorithm relies on a single channel of T1-weighted MR images to achieve automated segmentation of cortical and subcortical brain areas, producing volumes of subcortical structures and surface models of the cerebral cortex. We evaluated the algorithm both qualitatively and quantitatively using manually labeled datasets, relevant comparator software solutions cited in the literature, and expert evaluations. The computational tools and atlases described in this paper will be distributed to the research community as part of the FreeSurfer image analysis package.
•FreeSurfer is a widely used and evolving processing suite for brain MRIs.
•Morphometric brain analysis in infants has lagged behind that of adults.
•Our novel pipeline accommodates T1-weighted brain MRIs from 0 to 2 year-olds.
•Its unified approach is valid across a full age range, without foregoing accuracy.
•Similar applications are largely derived from newborns only (often preterm).
Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature for such segmentation methods is to be robust against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.
•We present a fast and sequence-adaptive whole-brain segmentation algorithm.
•The method achieves accuracy comparable to the state of the art with low processing times.
•The method shows robustness to small training sets.
•The method readily handles multi-contrast MR scans.
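The sequence adaptivity described above comes from the generative model: the intensity parameters of each class are not fixed in advance, but re-estimated from the scan itself with EM under spatial priors from an atlas, so no particular MRI sequence is assumed. A toy 1D sketch of that scheme follows; it illustrates the general Bayesian idea only, not the paper's actual implementation.

```python
import numpy as np

def em_segment(intensities, priors, n_iter=20):
    """Toy sequence-adaptive segmentation: each class k has an atlas
    prior and a Gaussian intensity model whose mean/variance are
    re-estimated from the scan via EM.

    intensities: (N,) voxel intensities; priors: (N, K) atlas priors.
    Returns (N, K) posterior class probabilities.
    """
    # initialise class statistics from prior-weighted averages
    w = priors / priors.sum(0, keepdims=True)
    mu = w.T @ intensities
    var = w.T @ (intensities ** 2) - mu ** 2 + 1e-6
    for _ in range(n_iter):
        # E-step: posteriors proportional to prior * Gaussian likelihood
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(var)
        post = priors * lik
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: update Gaussian parameters from soft assignments
        nk = post.sum(0) + 1e-12
        mu = (post * intensities[:, None]).sum(0) / nk
        var = (post * (intensities[:, None] - mu) ** 2).sum(0) / nk + 1e-6
    return post
```

Because only the atlas priors carry information across scans, the same model can be applied to a T1, T2 or FLAIR image without retraining; the intensity model adapts at test time.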
•A vast literature of subcortical studies is analysed from multiple axes: imaging protocols; regions of interest; methodology.
•High-resolution atlases together with machine learning present a great opportunity for subcortical exploration with application to many clinical and research areas.
•Detailed atlases target multiple, typically ignored, small nuclei; machine learning is able to model the complex relationship between appearance and anatomy with fast inference.
•Computational requirements, robustness and validation are associated challenges that we need to address for a widespread use of machine learning in subcortical segmentation.
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
In this paper we present a method to segment four brainstem structures (midbrain, pons, medulla oblongata and superior cerebellar peduncle) from 3D brain MRI scans. The segmentation method relies on a probabilistic atlas of the brainstem and its neighboring brain structures. To build the atlas, we combined a dataset of 39 scans with already existing manual delineations of the whole brainstem and a dataset of 10 scans in which the brainstem structures were manually labeled with a protocol that was specifically designed for this study. The resulting atlas can be used in a Bayesian framework to segment the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy (mean error under 1 mm) and robustness (no failures in 383 scans including 168 AD cases). We also indirectly evaluate the algorithm with an experiment in which we study the atrophy of the brainstem in aging. The results show that, when used simultaneously, the volumes of the midbrain, pons and medulla are significantly more predictive of age than the volume of the entire brainstem, estimated as their sum. The results also demonstrate that the method can detect atrophy patterns in the brainstem structures that have been previously described in the literature. Finally, we demonstrate that the proposed algorithm is able to detect differential effects of AD on the brainstem structures. The method will be implemented as part of the popular neuroimaging package FreeSurfer.
•A Bayesian method to segment 4 brainstem structures (midbrain, pons, medulla, SCP).
•The method relies on a probabilistic atlas built upon 49 manually labeled scans.
•The method is robust to changes in MRI contrast.
•Robust segmentations are produced for T1 and FLAIR scans from 3 different datasets.
•The method will be made publicly available.
•SynthSR turns clinical scans of different resolution and contrast into 1 mm MPRAGEs.
•It relies on a CNN trained on fake images synthesized on the fly at every minibatch.
•It can be retrained for any combination of resolutions / contrasts without new data.
•It enables segmentation, registration, etc. with existing software (e.g., FreeSurfer).
•Code is open source.
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols – even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing, beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry.
The source code is publicly available at https://github.com/BBillot/SynthSR.
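The key idea of training on synthetic inputs generated from 3D segmentations can be sketched as follows: draw a random intensity per label (randomizing contrast), add noise, and blur along the slice axis to mimic large slice spacing. The code below is a minimal illustration of that generative principle; all hyper-parameters are illustrative assumptions, not those of SynthSR.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_image(labels, rng=None, slice_thickness=4.0):
    """Sketch of a SynthSR-style synthetic training image: given a 3D
    label map, paint each label with a random intensity (random
    'contrast'), add noise, and blur along the last axis to mimic
    thick slices."""
    rng = np.random.default_rng() if rng is None else rng
    means = rng.uniform(0, 1, size=labels.max() + 1)  # one intensity per label
    img = means[labels].astype(float)                 # paint the label map
    img += rng.normal(0, 0.03, size=img.shape)        # scanner noise
    # simulate large slice spacing: heavy blur along the slice axis only
    img = gaussian_filter(img, sigma=(0, 0, slice_thickness / 2.355))
    return img
```

Resampling contrast, noise and blur at every minibatch is what lets a single training pipeline cover arbitrary combinations of acquisition protocols without any high-resolution real images of the input contrasts.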