Research on undersampled magnetic resonance image (MRI) reconstruction can increase imaging speed and reduce patient discomfort. In this paper, an undersampled MRI reconstruction method based on Generative Adversarial Networks with a Self-Attention mechanism and a Relative Average discriminator (SARA-GAN) is proposed. In SARA-GAN, relative average discriminator theory is applied to make full use of prior knowledge: half of the discriminator's input data is real and half is fake. At the same time, a self-attention mechanism is incorporated into the high layers of the generator to build long-range dependencies across the image, overcoming the limited receptive field of the convolution kernel. In addition, spectral normalization is employed to stabilize the training process. Compared with three widely used GAN-based MRI reconstruction methods, i.e., DAGAN, DAWGAN, and DAWGAN-GP, the proposed method obtains a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and the details of the reconstructed images are more abundant and more realistic for further clinical scrutiny and diagnostic tasks.
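The PSNR metric used above has a simple closed form: 10·log10(MAX²/MSE) between a reference image and its reconstruction. A minimal sketch in plain Python (images as flat pixel lists; the function name `psnr` is ours for illustration, not from the paper):

```python
import math

def psnr(ref, recon, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, recon)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the fully sampled reference; SSIM complements it by comparing local structure rather than pixel-wise error.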
The large-scale sharing of task-based functional neuroimaging data has the potential to allow novel insights into the organization of mental function in the brain, but the field of neuroimaging has lagged behind other areas of bioscience in the development of data sharing resources. This paper describes the OpenFMRI project (accessible online at http://www.openfmri.org), which aims to provide the neuroimaging community with a resource to support open sharing of task-based fMRI studies. We describe the motivation behind the project, focusing particularly on how this project addresses some of the well-known challenges to sharing of task-based fMRI data. Results from a preliminary analysis of the current database are presented, which demonstrate the ability to classify between task contrasts with high generalization accuracy across subjects, and the ability to identify individual subjects from their activation maps with moderately high accuracy. Clustering analyses show that the similarity relations between statistical maps have a somewhat orderly relation to the mental functions engaged by the relevant tasks. These results highlight the potential of the project to support large-scale multivariate analyses of the relation between mental processes and brain function.
Neuroimaging studies have found the hippocampus and its subfields to be diversely affected in Alzheimer's disease (AD) and its early stages. However, our knowledge of the trajectories of hippocampal and subfield atrophy with the progression of Alzheimer's disease is still lacking.
To identify which hippocampal subfields differ along the trajectory of Alzheimer's disease using magnetic resonance imaging (MRI), and to determine whether individual differences in memory can be explained by the structural volumes of hippocampal subfields.
Four groups of participants, comprising 41 AD patients, 43 amnestic mild cognitive impairment (aMCI) patients, 35 subjective cognitive decline (SCD) patients, and 42 normal controls (NC), underwent structural MRI brain scans. Structural MR images were processed with the FreeSurfer 6.0 image analysis suite to extract the hippocampus and its subfields. Furthermore, we investigated relationships between hippocampal subfield volumes and memory test variables (AVLT-immediate recall, AVLT-delayed recall, AVLT-recognition); the regression analyses were controlled for age, gender, education, and eTIV.
CA1, the subiculum, the presubiculum, the molecular layer, and the fimbria showed a trend toward significant volume reduction across the four groups with the progression of Alzheimer's disease. The volume of the left subiculum was most strongly correlated with performance across AVLT measures.
These trends of change in the hippocampal subfields further illustrate that SCD is a preclinical stage of AD that precedes aMCI. Future studies should aim to associate atrophy of the hippocampal subfields in SCD with possible conversion to aMCI or AD using longitudinal designs.
We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics, and that models using a combination of different connectivity descriptors outperform classifiers using only one metric. It follows from this flexibility that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks by varying which connectivity descriptor combinations are used to train the network.
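The core idea of convolving over a connectivity matrix can be illustrated with a 1×N "row" filter that spans a full line of the connectome, so each output summarizes one region's entire connectivity profile. A minimal plain-Python sketch (the function name and shapes are illustrative assumptions, not the CCNN implementation):

```python
def row_convolve(conn_matrix, row_filter):
    """Apply a 1 x N filter across each row of an N x N connectivity
    matrix, producing one value per brain region. This mimics the
    'line' convolution idea: a filter that sees a region's full
    connectivity profile at once, rather than a small local patch."""
    return [sum(w * f for w, f in zip(row, row_filter))
            for row in conn_matrix]
```

A real network would learn many such filters, stack further layers on their outputs, and train end to end on labeled connectomes.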
We have developed a method for automated probabilistic reconstruction of a set of major white-matter pathways from diffusion-weighted MR images. Our method is called TRACULA (TRActs Constrained by UnderLying Anatomy) and utilizes prior information on the anatomy of the pathways from a set of training subjects. By incorporating this prior knowledge in the reconstruction procedure, our method obviates the need for manual interaction with the tract solutions at a later stage and thus facilitates the application of tractography to large studies. In this paper we illustrate the application of the method on data from a schizophrenia study and investigate whether the inclusion of both patients and healthy subjects in the training set affects our ability to reconstruct the pathways reliably. We show that, since our method does not constrain the exact spatial location or shape of the pathways but only their trajectory relative to the surrounding anatomical structures, a set of healthy training subjects can be used to reconstruct the pathways accurately in patients as well as in controls.
Correction of echo planar imaging (EPI)-induced distortions (called "unwarping") improves anatomical fidelity for diffusion magnetic resonance imaging (MRI) and functional imaging investigations. Commonly used unwarping methods require the acquisition of supplementary images during the scanning session. Alternatively, distortions can be corrected by nonlinear registration to a non-EPI acquired structural image. In this study, we compared reliability using two methods of unwarping: (1) nonlinear registration to a structural image using symmetric normalization (SyN) implemented in Advanced Normalization Tools (ANTs); and (2) unwarping using an acquired field map. We performed this comparison in two different test-retest data sets acquired at differing sites (n = 39 and n = 32). In both data sets, nonlinear registration provided higher test-retest reliability of the output fractional anisotropy (FA) maps than field map-based unwarping, even when accounting for the effect of interpolation on the smoothness of the images. In general, field map-based unwarping was preferable only when the field maps were acquired optimally.
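Test-retest reliability in such comparisons is typically quantified with an index like the intraclass correlation coefficient; as a deliberately simplified proxy, a session-to-session Pearson correlation of voxel-wise FA values can be sketched in plain Python (a hypothetical helper, not the study's actual pipeline):

```python
import math

def pearson_r(session1, session2):
    """Pearson correlation between matched measurements (e.g. FA
    values at the same voxels) from two scanning sessions. Values
    near 1.0 indicate high session-to-session agreement; a formal
    reliability analysis would use an ICC instead."""
    n = len(session1)
    m1 = sum(session1) / n
    m2 = sum(session2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(session1, session2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in session1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in session2))
    return cov / (s1 * s2)
```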
Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis.
In the last decade, diffusion MRI (dMRI) studies of the human and animal brain have been used to investigate a multitude of pathologies and drug-related effects in neuroscience research. Study after study identifies white matter (WM) degeneration as a crucial biomarker for these diseases, and the tool of choice for studying WM is dMRI. However, dMRI has an inherently low signal-to-noise ratio, and its acquisition requires a relatively long scan time; in fact, the high loads required occasionally stress scanner hardware past the point of physical failure. As a result, many types of artifacts compromise the quality of diffusion imagery. Using such artifact-laden scans without quality control (QC) can introduce considerable error and bias into subsequent analyses, negatively affecting the results of research studies that rely on them. Nevertheless, dMRI QC remains an under-recognized issue in the dMRI community, as no user-friendly tools are commonly available to address it comprehensively, and current dMRI studies consequently often do a poor job of QC. Thorough QC of dMRI will reduce measurement noise and improve reproducibility and sensitivity in neuroimaging studies; this will allow researchers to more fully exploit the power of the dMRI technique and will ultimately advance neuroscience. Therefore, in this manuscript, we present our open-source software, DTIPrep, as a unified, user-friendly platform for thorough QC of dMRI data. The artifacts it handles include those caused by eddy currents, head motion, bed vibration and pulsation, venetian blind artifacts, and slice-wise and gradient-wise intensity inconsistencies. This paper summarizes the basic set of DTIPrep features described earlier and focuses on newly added capabilities related to directional artifacts and bias analysis.
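A slice-wise intensity consistency check of the kind mentioned above can be caricatured by flagging slices whose mean intensity is a statistical outlier across the volume. This is a deliberately simplified plain-Python sketch under our own assumptions; DTIPrep's actual slice-wise criteria are more sophisticated:

```python
def flag_inconsistent_slices(volume, z_thresh=3.0):
    """Flag slices whose mean intensity deviates from the
    across-slice mean by more than z_thresh standard deviations.
    `volume` is a list of 2D slices (lists of row lists).
    Returns the indices of suspect slices."""
    means = [sum(sum(row) for row in s) / (len(s) * len(s[0]))
             for s in volume]
    mu = sum(means) / len(means)
    sd = (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5
    return [i for i, m in enumerate(means)
            if sd > 0 and abs(m - mu) > z_thresh * sd]
```

For example, a volume with one abnormally bright slice (e.g. from a vibration or signal-dropout artifact) would have that slice's index returned for visual inspection or exclusion.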
The Human Connectome Project (HCP) is a major endeavor that will acquire and analyze connectivity data plus other neuroimaging, behavioral, and genetic data from 1,200 healthy adults. It will serve as a key resource for the neuroscience research community, enabling discoveries of how the brain is wired and how it functions in different individuals. To fulfill its potential, the HCP consortium is developing an informatics platform that will handle: (1) storage of primary and processed data, (2) systematic processing and analysis of the data, (3) open-access data-sharing, and (4) mining and exploration of the data. This informatics platform will include two primary components. ConnectomeDB will provide database services for storing and distributing the data, as well as data analysis pipelines. Connectome Workbench will provide visualization and exploration capabilities. The platform will be based on standard data formats and provide an open set of application programming interfaces (APIs) that will facilitate broad utilization of the data and integration of HCP services into a variety of external applications. Primary and processed data generated by the HCP will be openly shared with the scientific community, and the informatics platform will be available under an open source license. This paper describes the HCP informatics platform as currently envisioned and places it into the context of the overall HCP vision and agenda.