Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the differences in intensity characteristics across imaging sequences make the development of a general-purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) dedicated to producing consistent and accurate brain extraction. The method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases.
In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained when performing leave-one-out cross validation selecting only 20 priors from the library. Validation using the online Segmentation Validation Engine resulted in a top ranking position with a mean Dice coefficient of 0.9781±0.0047. Robustness of BEaST is demonstrated on all baseline ADNI data, resulting in a very low failure rate. The segmentation accuracy of the method is better than two widely used publicly available methods and recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach, while being 40 times faster and requiring a much smaller library of priors.
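The Dice similarity coefficient used in this validation measures the overlap between a computed brain mask and a reference mask. A minimal sketch of the metric (the function name and inputs are illustrative, not part of BEaST itself):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1.0 indicates identical masks; the 0.98 scores above thus correspond to near-perfect overlap with the manual reference segmentations.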
Historically, medical imaging has been a qualitative or semi-quantitative modality. It is difficult to quantify what can be seen in an image and to turn it into valuable predictive outcomes. As a result of advances in both computational hardware and machine learning algorithms, computers are making great strides in obtaining quantitative information from imaging and correlating it with outcomes. Radiomics, in its two forms, handcrafted and deep, is an emerging field that translates medical images into quantitative data to yield biological information and enable radiologic phenotypic profiling for diagnosis, theragnosis, decision support, and monitoring. Handcrafted radiomics is a multistage process in which features based on shape, pixel intensities, and texture are extracted from radiographs. Within this review, we describe the steps of this process: how quantitative imaging data can be extracted and correlated with clinical and biological outcomes, resulting in models that predict outcomes such as survival, or that support detection and classification in diagnostics. The application of deep learning, the second arm of radiomics, and its place in the radiomics workflow are discussed, along with its advantages and disadvantages. To better illustrate the technologies being used, we provide real-world clinical applications of radiomics in oncology, showcasing research on its applications as well as covering its limitations and its future direction.
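As a rough illustration of the handcrafted arm of radiomics, a few first-order intensity features can be computed directly from a region of interest. This is a simplified sketch: the function name, bin count, and feature set are illustrative assumptions, and real radiomics pipelines add many shape and texture features on top of statistics like these.

```python
import numpy as np

def first_order_features(roi, n_bins=32):
    """Illustrative first-order radiomics features from a region of interest:
    simple intensity statistics plus histogram entropy."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return {
        "mean": x.mean(),
        "std": x.std(),
        "skewness": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "entropy": -(p * np.log2(p)).sum(),
    }
```

Feature vectors of this kind, extracted per lesion, are what downstream models correlate with clinical and biological outcomes.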
Diffusion tensor imaging (DTI) is a well-established magnetic resonance imaging (MRI) technique used for studying microstructural changes in the white matter. As with many other imaging modalities, DTI images suffer from technical between-scanner variation that hinders comparisons of images across imaging sites, scanners, and over time. Using fractional anisotropy (FA) and mean diffusivity (MD) maps of 205 healthy participants acquired on two different scanners, we show that DTI measurements are highly site-specific, highlighting the need for correcting site effects before performing downstream statistical analyses. We first show evidence that combining DTI data from multiple sites without harmonization may be counter-productive and may negatively impact inference. We then propose and compare several harmonization approaches for DTI data, and show that ComBat, a popular batch-effect correction tool used in genomics, performs best at modeling and removing the unwanted inter-site variability in FA and MD maps. Using age as a biological phenotype of interest, we show that ComBat both preserves biological variability and removes the unwanted variation introduced by site. Finally, we assess the different harmonization methods in the presence of different levels of confounding between site and age, and test robustness to small-sample-size studies.
•Significant site and scanner effects exist in DTI scalar maps.
•Several multi-site harmonization methods are proposed.
•ComBat performs the best at removing site effects in FA and MD.
•Voxels associated with age in FA and MD are more replicable after ComBat.
•ComBat is generalizable to other imaging modalities.
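The core location/scale idea behind ComBat-style harmonization can be sketched as follows. This is a deliberately simplified illustration with a hypothetical function name: it aligns each site's mean and variance to the pooled values, whereas the real ComBat additionally preserves covariate effects (e.g. age) via regression and stabilizes the per-site estimates with empirical-Bayes shrinkage.

```python
import numpy as np

def harmonize_location_scale(values, sites):
    """Simplified site harmonization sketch: rescale each site's measurements
    so their mean and variance match the pooled mean and variance."""
    x = np.asarray(values, dtype=float)
    sites = np.asarray(sites)
    out = np.empty_like(x)
    grand_mean, grand_std = x.mean(), x.std()
    for s in np.unique(sites):
        m = sites == s
        mu, sd = x[m].mean(), x[m].std()
        # standardize within site, then map onto the pooled distribution
        out[m] = (x[m] - mu) / (sd + 1e-12) * grand_std + grand_mean
    return out
```

After this adjustment the per-site means coincide, which is the "removing site effects" behavior the highlights above refer to, albeit without ComBat's safeguards for biological variability.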
Hanna, Matthew G; Parwani, Anil; Sirintrapun, Sahussapont Joseph. Whole Slide Imaging: Technology and Applications. Advances in Anatomic Pathology, July 2020, Vol. 27, No. 4. Journal article, peer-reviewed.
Pathology has benefited from advanced innovation with novel technology to implement a digital solution. Whole slide imaging is a disruptive technology in which glass slides are scanned to produce digital images. There have been significant advances in whole slide scanning hardware and software that allow ready access to whole slide images. The digital images, or whole slide images, can be viewed as digital files, comparably to glass slides under a microscope. Whole slide imaging has increased in adoption among pathologists, pathology departments, and scientists for clinical, educational, and research initiatives. Worldwide usage of whole slide imaging has grown significantly. Pathology regulatory organizations (eg, the College of American Pathologists) have put forth guidelines for clinical validation, and the US Food and Drug Administration has also approved whole slide imaging for primary diagnosis. This article reviews the digital pathology ecosystem and discusses clinical and nonclinical applications of its use.
As medical imaging enters its information era and presents rapidly increasing needs for big data analytics, robust pooling and harmonization of imaging data across diverse cohorts with varying acquisition protocols have become critical. We describe a comprehensive effort that merges and harmonizes a large-scale dataset of 10,477 structural brain MRI scans from participants without a known neurological or psychiatric disorder from 18 different studies that represent geographic diversity. We use this dataset and multi-atlas-based image processing methods to obtain a hierarchical partition of the brain from larger anatomical regions to individual cortical and deep structures and derive age trends of brain structure through the lifespan (3–96 years old). Critically, we present and validate a methodology for harmonizing this pooled dataset in the presence of nonlinear age trends. We provide a web-based visualization interface to generate and present the resulting age trends, enabling future studies of brain structure to compare their data with this reference of brain development and aging, and to examine deviations from these ranges, potentially related to disease.
•Multi-site harmonization method that pools volumetric data from 18 studies, controlling for nonlinear age effects.
•The resulting dataset covers ages 3 to 96 and is used to derive age trends of brain structure through the lifespan.
•An interactive visualization tool is provided for exploring age trends and comparing new data.
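One simple way to harmonize while controlling for a nonlinear age effect, sketched below, is to fit a pooled polynomial age trend, remove per-site offsets from the residuals, and then add the trend back so the biological age signal survives. This is an illustrative sketch under stated assumptions (hypothetical function, additive site offsets, a polynomial trend), not the paper's actual methodology.

```python
import numpy as np

def harmonize_with_age_trend(y, age, site, degree=2):
    """Illustrative harmonization that preserves a nonlinear age trend:
    fit a pooled polynomial trend, center residuals within each site,
    then restore the trend."""
    y, age, site = np.asarray(y, float), np.asarray(age, float), np.asarray(site)
    coeffs = np.polyfit(age, y, degree)   # pooled nonlinear age trend
    trend = np.polyval(coeffs, age)
    resid = y - trend
    out = np.empty_like(y)
    for s in np.unique(site):
        m = site == s
        out[m] = resid[m] - resid[m].mean()  # remove the site offset
    return out + trend
```

The key design point, as in the study above, is that site effects are estimated on trend residuals rather than raw values, so harmonization does not flatten the age curve itself.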
•An open-source platform is implemented based on TensorFlow APIs for deep learning in the medical imaging domain.
•A modular implementation of the typical medical imaging machine learning pipeline facilitates (1) warm starts with established pre-trained networks, (2) adapting existing neural network architectures to new problems, and (3) rapid prototyping of new solutions.
•Three deep-learning applications, spanning segmentation, regression, image generation and representation learning, are presented as concrete examples illustrating the platform's key features.
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application domain requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon.
The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default.
We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
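A modular augmentation component of the kind such pipelines compose can be sketched as follows. This is an illustrative numpy sketch of the data-augmentation idea, not NiftyNet's actual API; the function name and parameters are assumptions.

```python
import numpy as np

def random_flip(volume, axes=(0, 1, 2), rng=None):
    """Illustrative augmentation step for a 3D medical image volume:
    flip along each listed spatial axis independently with probability 0.5."""
    rng = np.random.default_rng(rng)
    for ax in axes:
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=ax)
    return volume
```

In a modular pipeline, small components like this are chained after data loading and before the network, which is what makes swapping architectures or augmentations cheap.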
The importance of neuronal morphology in brain function has been recognized for over a century. The broad applicability of “digital reconstructions” of neuron morphology across neuroscience subdisciplines has stimulated the rapid development of numerous synergistic tools for data acquisition, anatomical analysis, three-dimensional rendering, electrophysiological simulation, growth models, and data sharing. Here we discuss the processes of histological labeling, microscopic imaging, and semiautomated tracing. Moreover, we provide an annotated compilation of currently available resources in this rich research “ecosystem” as a central reference for experimental and computational neuroscience.
Neuronal morphology and brain function are intertwined. In this Primer, Parekh and Ascoli discuss reconstruction of neuronal morphology. They cover the processes of labeling, imaging, and tracing of neurons and provide an annotated compilation of currently available resources for digital reconstruction.
Surface-based cortical registration methods that are driven by geometrical features, such as folding, provide sub-optimal alignment of many functional areas due to the variable correlation between cortical folding patterns and function. This has led to the proposal of new registration methods using features derived from functional and diffusion imaging. However, as yet there is no consensus on the best set of features for optimal alignment of brain function.
In this paper, we demonstrate the utility of a new Multimodal Surface Matching (MSM) algorithm capable of driving alignment using a wide variety of descriptors of brain architecture, function and connectivity. The versatility of the framework originates from adapting the discrete Markov Random Field (MRF) registration method to surface alignment. This has the benefit of being very flexible in the choice of similarity measure and relatively insensitive to local minima. The method offers significant flexibility in the choice of feature set, and we demonstrate the advantages of this by performing registrations using univariate descriptors of surface curvature and myelination, multivariate feature sets derived from resting fMRI, and multimodal descriptors of surface curvature and myelination. We compare the results with two state-of-the-art surface registration methods that use geometric features: FreeSurfer and Spherical Demons. In the future, the MSM technique will allow explorations into the best combinations of features and alignment strategies for inter-subject alignment of cortical functional areas for a wide range of neuroimaging data sets.
•Comprehensive introduction to 3D deconvolution microscopy.
•Description of standard deconvolution algorithms.
•Presentation of the Java open-source software DeconvolutionLab2.
•Benchmark on open reference datasets.
Images in fluorescence microscopy are inherently blurred due to the limit of diffraction of light. The purpose of deconvolution microscopy is to compensate numerically for this degradation. Deconvolution is widely used to restore fine details of 3D biological samples. Unfortunately, dealing with deconvolution tools is not straightforward. Among others, end users have to select the appropriate algorithm, calibration and parametrization, while potentially facing demanding computational tasks. To make deconvolution more accessible, we have developed a practical platform for deconvolution microscopy called DeconvolutionLab. Freely distributed, DeconvolutionLab hosts standard algorithms for 3D microscopy deconvolution and drives them through a user-oriented interface. In this paper, we take advantage of the release of DeconvolutionLab2 to provide a complete description of the software package and its built-in deconvolution algorithms. We examine several standard algorithms used in deconvolution microscopy, notably: Regularized inverse filter, Tikhonov regularization, Landweber, Tikhonov–Miller, Richardson–Lucy, and fast iterative shrinkage-thresholding. We evaluate these methods over large 3D microscopy images using simulated datasets and real experimental images. We distinguish the algorithms in terms of image quality, performance, usability and computational requirements. Our presentation is completed with a discussion of recent trends in deconvolution, inspired by the results of the Grand Challenge on deconvolution microscopy that was recently organized.
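Richardson–Lucy, one of the standard algorithms examined, iteratively updates an estimate by comparing the observed image against the current estimate blurred by the point-spread function (PSF). A minimal 1D sketch of the update rule follows; the function name is illustrative, and DeconvolutionLab2 itself operates on full 3D stacks rather than 1D signals.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=50):
    """Minimal 1D Richardson–Lucy deconvolution sketch.

    Update rule: estimate <- estimate * (psf* ⊛ (observed / (psf ⊛ estimate))),
    where psf* is the flipped PSF and ⊛ denotes convolution.
    """
    observed = np.asarray(observed, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()            # normalized PSF conserves total intensity
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)            # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

On noiseless data the iterations progressively re-concentrate blurred intensity toward the original sources, which is why the algorithm is a common baseline in the benchmarks described above; on noisy data it requires early stopping or regularization.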