Purpose
Inter-scan motion is a substantial source of error in R1 estimation methods that combine multiple volumes, such as variable flip angle (VFA), and can be expected to worsen at 7T, where B1 fields are more inhomogeneous. The established correction scheme does not translate to 7T because it requires a body coil reference. Here we introduce two alternatives that outperform the established method. Because they compute relative sensitivities, they do not require body coil images.
Theory
The proposed methods use coil-combined magnitude images to obtain the relative coil sensitivities. The first method efficiently computes the relative sensitivities via a simple ratio; the second estimates them by fitting a more sophisticated generative model.
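As a rough sketch of the ratio variant (illustrative only: the function name, smoothing width, and absence of masking are our assumptions, not the published implementation), in Python:

from scipy.ndimage import gaussian_filter

def relative_sensitivity(calib_ref, calib_moved, sigma_vox=8.0, eps=1e-6):
    # Relative receive-sensitivity map between two head positions, from
    # coil-combined magnitude calibration images of the same anatomy.
    # Smoothing numerator and denominator separately keeps the division
    # from amplifying noise in low-signal regions; the receive field is
    # assumed spatially smooth, so heavy smoothing is appropriate.
    num = gaussian_filter(calib_moved, sigma_vox)
    den = gaussian_filter(calib_ref, sigma_vox) + eps
    return num / den

# A VFA volume acquired after motion can then be referred back to the
# receive field of the reference position before R1 fitting:
# vfa_corrected = vfa_moved / relative_sensitivity(calib_ref, calib_moved)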
Methods
R1 maps were computed using the VFA approach. Multiple datasets were acquired at 3T and 7T, with and without motion between the acquisition of the VFA volumes. R1 maps were constructed without correction, with the proposed corrections, and (at 3T) with the previously established correction scheme. The effect of the greater inhomogeneity in the transmit field at 7T was also explored by acquiring B1+ maps at each position.
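For concreteness, the VFA R1 fit can be written as the standard two-point DESPOT1 linearization. The sketch below is the textbook estimator, not necessarily the exact fitting procedure used here; the example flip angles and TR are illustrative only.

import numpy as np

def vfa_r1(s1, s2, alpha1, alpha2, tr_s):
    # Two-point DESPOT1 estimate of R1 (1/s) from spoiled gradient-echo
    # volumes s1, s2 at flip angles alpha1, alpha2 (radians), common TR.
    # The SPGR signal obeys S/sin(a) = E1 * S/tan(a) + M0*(1 - E1) with
    # E1 = exp(-TR * R1), so E1 is the slope through the two points.
    y1, y2 = s1 / np.sin(alpha1), s2 / np.sin(alpha2)
    x1, x2 = s1 / np.tan(alpha1), s2 / np.tan(alpha2)
    e1 = np.clip((y2 - y1) / (x2 - x1), 1e-6, 1 - 1e-6)
    return -np.log(e1) / tr_s

# e.g., nominal flip angles of 6 and 21 degrees and TR = 25 ms:
# r1_map = vfa_r1(pdw_vol, t1w_vol, np.deg2rad(6), np.deg2rad(21), 0.025)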
Results
At 3T, the proposed methods outperformed the baseline method. Inter-scan motion artifacts were also reduced at 7T. However, at 7T, reproducibility only converged on that of the no-motion condition when position-specific transmit field effects were also incorporated.
Conclusion
The proposed methods simplify inter-scan motion correction of R1 maps and are applicable at both 3T and 7T, where a body coil is typically not available. Open-source code for all methods is publicly available.
Every year, millions of brain magnetic resonance imaging (MRI) scans are acquired in hospitals across the world. These have the potential to revolutionize our understanding of many neurological diseases, but their morphometric analysis has not yet been possible due to their anisotropic resolution. We present an artificial intelligence technique, "SynthSR," that takes clinical brain MRI scans with any MR contrast (T1, T2, etc.), orientation (axial/coronal/sagittal), and resolution and turns them into high-resolution T1 scans that are usable by virtually all existing human neuroimaging tools. We present results on segmentation, registration, and atlasing of >10,000 scans of controls and patients with brain tumors, strokes, and Alzheimer's disease. SynthSR yields morphometric results that are very highly correlated with what one would have obtained with high-resolution T1 scans. SynthSR allows sample sizes that have the potential to overcome the power limitations of prospective research studies and shed new light on the healthy and diseased human brain.
The 18 kDa translocator protein (TSPO) is the main molecular target to image neuroinflammation by positron emission tomography (PET). However, TSPO-PET quantification is complex, and none of the kinetic modelling approaches has been validated using a voxel-by-voxel comparison of TSPO-PET data with the actual TSPO levels of expression. Here, we present a single case study of binary classification of in vivo PET data to evaluate the statistical performance of different TSPO-PET quantification methods. To that end, we induced a localized and adjustable increase of TSPO levels in a non-human primate brain through a viral-vector strategy. We then performed a voxel-wise comparison of the different TSPO-PET quantification approaches providing parametric 18F-DPA-714 PET images, with co-registered in vitro three-dimensional TSPO immunohistochemistry (3D-IHC) data. A data matrix was extracted from each brain hemisphere, containing the TSPO-IHC and TSPO-PET data for each voxel position. Each voxel was then classified as true or false, positive or negative, after comparison of the TSPO-PET measure to the reference 3D-IHC method. Finally, receiver operating characteristic (ROC) curves were calculated for each TSPO-PET quantification method. Our results show that the standard uptake value ratio using the cerebellum as a reference region (SUVCBL) has the best ROC score amongst all non-invasive approaches.
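The voxel-wise evaluation reduces to standard binary classification. A minimal sketch (hypothetical variable names; scikit-learn for the ROC computation; the IHC positivity threshold is an assumed analysis choice):

import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_for_pet_map(pet_vals, ihc_vals, ihc_threshold, mask):
    # Score one parametric TSPO-PET map against the 3D-IHC reference:
    # voxels above the IHC threshold are the 'true positive' class, and
    # the PET value itself is the classification score.
    y_true = (ihc_vals[mask] > ihc_threshold).astype(int)
    y_score = pet_vals[mask]
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return fpr, tpr, auc(fpr, tpr)

# Quantification methods are then compared by their areas under the curve:
# _, _, auc_suvr = roc_for_pet_map(suvr_cbl_map, ihc_3d, thr, brain_mask)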
In biomedical research, cell analysis is important to assess physiological and pathophysiological information. Virtual microscopy offers the unique possibility to study the composition of tissues at a cellular scale. However, images acquired at such high spatial resolution are massive, contain complex information, and are therefore difficult to analyze automatically. In this article, we address the problem of individualization of size-varying and touching neurons in optical microscopy two-dimensional (2-D) images. Our approach is based on a series of processing steps that incorporate increasingly more information. (1) After segmentation of the neuron class using a Random Forest classifier, a novel min-max filter is used to enhance neuron centroids and boundaries, enabling a region-growing process, driven by a contour-based model, to reach neuron boundaries and individualize touching neurons. (2) To account for size-varying neurons, an adaptive multiscale procedure for individualizing touching neurons is proposed. This protocol was evaluated in 17 major anatomical regions from three NeuN-stained macaque brain sections presenting diverse and comprehensive neuron densities. Qualitative and quantitative analyses demonstrate that the proposed method provides satisfactory results in most regions (e.g., caudate, cortex, subiculum, and putamen) and outperforms a baseline watershed algorithm. Neuron counts obtained with our method show high correlation with an adapted stereology technique performed by two experts (0.983 and 0.975 for the two experts, respectively). Neuron diameters obtained with our method ranged between 2 and 28.6 μm, matching values reported in the literature. Further work will aim to evaluate the impact of staining and interindividual variability on our protocol.
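The min-max filter is the authors' own design, but the general idea of using local intensity extrema to separate cell interiors from boundaries can be illustrated with a standard morphological gradient (a loose analogue, not the published filter):

import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_range(image, size=5):
    # Local range (max - min) over a small window: large on edges between
    # touching cells, small inside homogeneous cell bodies. Peaks of the
    # inverted response can seed region growing at cell centroids.
    img = image.astype(np.float32)
    return maximum_filter(img, size) - minimum_filter(img, size)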
Because they bridge the genetic gap between rodents and humans, non-human primates (NHPs) play a major role in therapy development and evaluation for neurological disorders. However, translational research success from NHPs to patients requires accurate phenotyping of the models. In patients, magnetic resonance imaging (MRI) combined with automated segmentation methods has offered the unique opportunity to assess in vivo brain morphological changes. Meanwhile, specific challenges caused by brain size and high-field contrasts make existing algorithms hard to use routinely in NHPs. To tackle this issue, we propose a complete pipeline, Primatologist, for multi-region segmentation. Tissue segmentation is based on a modular statistical model that includes random-field regularization, bias correction, and denoising, and is optimized by expectation-maximization. To deal with the broad variety of structures with different relaxation times at 7 T, images are segmented into 17 anatomical classes, including subcortical regions. Pre-processing steps ensure a good initialization of the parameters and thus the robustness of the pipeline. It is validated on 10 T2-weighted MRIs of healthy macaque brains. Classification scores are compared with those of a non-linear atlas registration, and the impact of each module on classification scores is thoroughly evaluated.
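The statistical core of such pipelines is typically a Gaussian mixture over voxel intensities fitted by expectation-maximization. The sketch below keeps only the E and M steps, omitting the random-field regularization, bias correction, denoising, and atlas prior that the full model adds:

import numpy as np

def em_gmm(intensities, n_classes, n_iter=50):
    x = intensities.ravel().astype(np.float64)
    # Crude initialization from intensity quantiles
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))
    var = np.full(n_classes, x.var() / n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: per-voxel class responsibilities
        lik = (pi / np.sqrt(2 * np.pi * var)
               * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances
        nk = resp.sum(axis=0)
        pi, mu = nk / x.size, resp.T @ x / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
    return resp.argmax(axis=1).reshape(intensities.shape), mu, var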
•A segmentation pipeline is proposed for the non-human primate neuroimaging community.
•It allows automatic segmentation of macaque brain MRIs into 17 anatomical classes.
•It relies on a generative model of intensity and a 3D digital atlas.
•We show that Primatologist performs better than a conventional atlas registration.
We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer’s Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer’s disease cases and controls. The tools are available in our widespread neuroimaging suite ‘FreeSurfer’ (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
Every year, thousands of human brains are donated to science. These brains are used to study normal aging, as well as neurological diseases like Alzheimer’s or Parkinson’s. Donated brains usually go to ‘brain banks’, institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes. Often, studies of dead brains rely on qualitative observations, such as ‘the hippocampus displays some atrophy’, rather than concrete ‘numerical’ measurements. This is because the gold standard to take three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise, especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions. To bridge this gap, Gazula et al. have developed openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data can be used to take precise quantitative measurements that better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions). The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI images. Gazula et al. show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures. The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widespread neuroimaging software that can be used by any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, allowing them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.
Alteration of brain aerobic glycolysis is often observed early in the course of Alzheimer’s disease (AD). Whether and how such metabolic dysregulation contributes to both synaptic plasticity and behavioral deficits in AD is not known. Here, we show that the astrocytic l-serine biosynthesis pathway, which branches from glycolysis, is impaired in young AD mice and in AD patients. l-serine is the precursor of d-serine, a co-agonist of synaptic NMDA receptors (NMDARs) required for synaptic plasticity. Accordingly, AD mice display a lower occupancy of the NMDAR co-agonist site as well as synaptic and behavioral deficits. Similar deficits are observed following inactivation of the l-serine synthetic pathway in hippocampal astrocytes, supporting the key role of astrocytic l-serine. Supplementation with l-serine in the diet prevents both synaptic and behavioral deficits in AD mice. Our findings reveal that astrocytic glycolysis controls cognitive functions and suggest oral l-serine as a ready-to-use therapy for AD.
•Astrocytes have impaired glycolytic flux in a mouse model of Alzheimer’s disease
•Consequently, astrocytes produce less glycolysis-derived l-serine
•Low NMDAR occupancy by d-serine leads to impairment of synaptic plasticity and memory
•Dietary supplementation of l-serine restores synaptic plasticity and memory
Le Douce et al. show that glycolysis is impaired in astrocytes in the early stages of disease in a mouse model of Alzheimer’s. This leads to the reduction of both l- and d-serine synthesis and to the alteration of synaptic plasticity and memory. Dietary supplementation with l-serine restores both deficits, suggesting it to be a potential therapy.
•SynthSR turns clinical scans of different resolution and contrast into 1 mm MPRAGEs.
•It relies on a CNN trained on fake images synthesized on the fly at every minibatch.
•It can be retrained for any combination of resolutions/contrasts without new data.
•It enables segmentation, registration, etc. with existing software (e.g., FreeSurfer).
•Code is open source.
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
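The on-the-fly synthesis of training data can be caricatured in a few lines: assign random per-label intensities to emulate an arbitrary contrast, then degrade the through-plane resolution. All parameter ranges below are invented for illustration; the actual generative model is considerably richer (bias fields, deformations, variable resolution, etc.):

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def synth_training_pair(labels, rng, slice_gap=6, sigma_noise=0.02):
    # From a 3D label map, synthesize a (low-res input, high-res target)
    # pair. Assumes labels.shape[2] is divisible by slice_gap.
    means = rng.uniform(0.0, 1.0, labels.max() + 1)  # random contrast
    hr = means[labels] + rng.normal(0.0, sigma_noise, labels.shape)
    # Blur and subsample along the slice axis, then upsample back so the
    # network sees a dense grid with low through-plane resolution.
    lr = gaussian_filter(hr, sigma=(0, 0, slice_gap / 2.355))
    lr = zoom(lr[:, :, ::slice_gap], (1, 1, slice_gap), order=1)
    return lr, hr

# rng = np.random.default_rng(0)
# lr_input, hr_target = synth_training_pair(segmentation_volume, rng)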
•A framework for automatically learning shape and appearance models without manual annotations.
•Designed to run within a distributed privacy-preserving framework.
•When used as a pattern recognition approach, can give competitive classification accuracies for MNIST, particularly for small numbers of training examples.
•Can handle missing data in the images.
•Tested the model with 1900 brain scans and found that its latent variables can be used as features for pattern recognition.
This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. The algorithm was developed with the aim of eventually enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications.
The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks etc.). It is applied to the MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle “missing data”, which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed by applying it to a dataset of over 1900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets.
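For intuition about the "share the basis, keep the codes" pattern, plain PCA gives a much-simplified analogue (the actual model jointly learns shape and appearance and handles missing data; this sketch covers only the appearance-code idea):

import numpy as np

def fit_basis(images, n_components):
    # Learn a shared appearance basis from a stack of vectorized images
    # (one row per image); this is what would be passed across sites.
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]

def encode(image, mean, basis):
    # Low-dimensional latent code for one image: the only thing a site
    # would expose, while the raw image stays local.
    return basis @ (image - mean)

# mean, basis = fit_basis(train_matrix, 32)   # shared across sites
# z = encode(new_image_vector, mean, basis)   # feature for classification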
Factorisation-Based Image Labelling. Yan, Yu; Balbastre, Yaël; Brudfors, Mikael; et al. Frontiers in Neuroscience, Volume 15, 2022.
Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time-consuming and expensive, so having a fully automated and general-purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of our proposed model against the state of the art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labeling. As our approach is intended to be general purpose, we also assess how well it can handle domain shift by labelling images of the same subjects acquired with different MR contrasts.
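As a point of reference for what label propagation means at its simplest, a nearest-neighbour patch matcher (deliberately naive; not the FIL generative model) would be:

import numpy as np

def nn_patch_label(target_patch, train_patches, train_labels):
    # Label a target patch with the label of the closest training patch
    # (L2 distance); FIL instead scores patches under a latent-variable
    # generative model, which is what lets it cope with new contrasts.
    d = ((train_patches - target_patch.ravel()) ** 2).sum(axis=1)
    return train_labels[np.argmin(d)]

# train_patches: (n_patches, patch_voxels) intensity patches
# train_labels:  (n_patches,) central-voxel labels from annotated images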