Brain extraction, or whole-brain segmentation, is an important first step in many neuroimage analysis pipelines. The accuracy and robustness of brain extraction are therefore crucial for the accuracy of the entire analysis process. State-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and the query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions, and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information, along with the original image patches, to learn the local shape and connectedness of the brain and extract it from non-brain tissue. The brain extraction results we obtained from our CNNs are superior to recently reported results in the literature on two publicly available benchmark data sets, LPBA40 and OASIS, on which we obtained Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm.
Furthermore, we evaluated the performance of our algorithm on the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN (Dice coefficient: 95.97%) performed much better than the other methods, which performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
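The Dice overlap coefficients reported above compare a predicted brain mask against a reference segmentation. As a minimal illustration (not the paper's implementation), Dice can be computed from two binary masks as follows:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (1 = brain, 0 = background)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: define overlap as perfect
        return 1.0
    return 2.0 * intersection / total

# Toy 2-D example: two overlapping 2x2 "brain" regions on a 4x4 grid
a = np.zeros((4, 4)); a[0:2, 0:2] = 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 1
print(dice_coefficient(a, b))  # 2*1 / (4+4) = 0.25
```

The same formula extends unchanged to 3-D volumes, since the masks are flattened implicitly by the elementwise operations.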
With an aim to increase the capture range and accelerate the performance of state-of-the-art inter-subject and subject-to-template 3-D rigid registration, we propose deep learning-based methods that are trained to find the 3-D position of arbitrarily oriented subjects or anatomy in a canonical space based on slices or volumes of medical images. For this, we propose regression convolutional neural networks (CNNs) that learn to predict the angle-axis representation of 3-D rotations and translations using image features. We use and compare mean square error and geodesic loss to train regression CNNs for 3-D pose estimation in two different scenarios: slice-to-volume registration and volume-to-volume registration. As an exemplary application, we applied the proposed methods to register arbitrarily oriented reconstructed images of fetuses scanned in utero at a wide gestational age range to a standard atlas space. Our results show that in such registration applications that are amenable to learning, the proposed deep learning methods with geodesic loss minimization achieved 3-D pose estimation with a wide capture range in real time (<100 ms). We also tested the generalization capability of the trained CNNs on an expanded age range and on images of newborn subjects with similar and different MR image contrasts. We trained our models on T2-weighted fetal brain MRI scans and used them to predict the 3-D pose of newborn brains based on T1-weighted MRI scans. We showed that the trained models generalized well to the new domain when we performed image contrast transfer through a conditional generative adversarial network. This indicates that the domain of application of the trained deep regression CNNs can be further expanded to image modalities and contrasts other than those used in training.
A combination of our proposed methods with accelerated optimization-based registration algorithms can dramatically enhance the performance of automatic imaging devices and image processing methods of the future.
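The geodesic loss mentioned above measures the angle of the relative rotation between a predicted and a ground-truth pose on SO(3), rather than a Euclidean error on rotation parameters. A minimal NumPy sketch (not the authors' training code) of the angle-axis parameterization and the geodesic distance:

```python
import numpy as np

def rotation_from_angle_axis(v):
    """Rodrigues' formula: angle-axis vector v (angle = ||v||) -> 3x3 rotation."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def geodesic_distance(R1, R2):
    """Angle (radians) of the relative rotation R1^T R2 -- the geodesic
    distance on SO(3) minimized instead of a mean square error on angles."""
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

# A 90-degree rotation about z vs. the identity -> geodesic distance pi/2
R = rotation_from_angle_axis(np.array([0.0, 0.0, np.pi / 2]))
print(geodesic_distance(np.eye(3), R))  # ~1.5708
```

The `np.clip` guards against `arccos` receiving values slightly outside [-1, 1] due to floating-point error, which matters when the two rotations are nearly identical.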
Fully convolutional deep neural networks have been shown to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks arises when the data are unbalanced, which is common in many medical imaging applications such as lesion segmentation, where lesion-class voxels are often far fewer in number than non-lesion voxels. A network trained with unbalanced data may make predictions with high precision and low recall, being severely biased toward the non-lesion class; this is particularly undesirable in most medical applications, where false negatives are more consequential than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and, more recently, similarity loss functions and focal loss. In this paper, we trained fully convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve a much better tradeoff between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on the Tversky index (using $F_\beta$ scores). We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction at patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation on two different datasets, MSSEG 2016 and the ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, reaching top performance in both challenges.
We compared the performance of our network trained with the $F_\beta$ loss, focal loss, and generalized Dice loss functions. Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and resulted in the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate, arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on $F_\beta$ scores allows training networks that strike a better balance between precision and recall in highly unbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patch-wise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.
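The asymmetric similarity loss described above is built on the Tversky index, which generalizes Dice by weighting false positives and false negatives differently. A minimal sketch with illustrative weights (not the exact loss layer of the trained network):

```python
import numpy as np

def tversky_index(pred, truth, alpha=0.3, beta=0.7):
    """Tversky index between a soft prediction and a binary ground truth.
    alpha weights false positives, beta weights false negatives; setting
    beta > alpha makes the loss (1 - index) penalize missed lesion voxels
    more, trading precision for recall. alpha = beta = 0.5 recovers Dice."""
    pred = np.asarray(pred, dtype=float).ravel()
    truth = np.asarray(truth, dtype=float).ravel()
    tp = np.sum(pred * truth)
    fp = np.sum(pred * (1 - truth))
    fn = np.sum((1 - pred) * truth)
    return tp / (tp + alpha * fp + beta * fn + 1e-8)  # eps avoids 0/0

# Toy example: 3 of 4 lesion voxels found, 1 false alarm
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(1.0 - tversky_index(pred, truth))  # the Tversky loss value
```

In a real network the same expression is written in the framework's tensor operations so that it stays differentiable with respect to the soft predictions.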
Diffusion-weighted magnetic resonance imaging (DWI) is one of the most promising tools for the analysis of neural microstructure and the structural connectome of the human brain. The application of DWI to map early development of the human connectome in utero, however, is challenged by intermittent fetal and maternal motion that disrupts the spatial correspondence of data acquired during the relatively long DWI acquisitions. Fetuses move continuously during DWI scans. Reliable and accurate analysis of the fetal brain structural connectome requires careful compensation of motion effects and robust reconstruction to avoid introducing bias based on the degree of fetal motion. In this paper we introduce a novel robust algorithm to reconstruct in-vivo diffusion-tensor MRI (DTI) of the moving fetal brain and show its effect on structural connectivity analysis. The proposed algorithm involves multiple steps of image registration, incorporating a dynamic registration-based motion tracking algorithm to restore the spatial correspondence of DWI data at the slice level and reconstruct DTI of the fetal brain in a standard (atlas) coordinate space. A weighted linear least squares approach is adapted to remove the effect of intra-slice motion and reconstruct DTI from motion-corrected data. The proposed algorithm was tested on data from 21 healthy fetuses scanned in utero at 22–38 weeks gestation. Significantly higher fractional anisotropy values in fiber-rich regions, together with whole-brain tractography and group structural connectivity analyses, showed the efficacy of the proposed method compared with analyses based on the original data and previously proposed methods. The results of this study show that slice-level motion correction and robust reconstruction are necessary for reliable in-vivo structural connectivity analysis of the fetal brain.
Connectivity analysis based on graph-theoretic measures shows a high degree of modularity and clustering and short average characteristic path lengths, indicative of the small-world property of the fetal brain network. These findings comply with previous findings in newborns and a recent study on fetuses. The proposed algorithm can provide valuable information from DWI of the fetal brain that is not available in the assessment of the original 2D slices and may be used to more reliably study the developing fetal brain connectome.
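The graph-theoretic measures mentioned above, clustering and characteristic path length, can be illustrated on a toy unweighted network; the adjacency list below is a hypothetical example, not a real connectome:

```python
from collections import deque
from itertools import combinations

# Toy undirected network: nodes stand in for brain regions, edges for
# tracts (hypothetical); adjacency is symmetric.
graph = {
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3},
    3: {0, 2, 4}, 4: {3, 5}, 5: {4},
}

def clustering_coefficient(g):
    """Mean local clustering: the fraction of each node's neighbour
    pairs that are themselves connected, averaged over nodes."""
    coeffs = []
    for node, nbrs in g.items():
        if len(nbrs) < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in g[a])
        coeffs.append(2.0 * links / (len(nbrs) * (len(nbrs) - 1)))
    return sum(coeffs) / len(coeffs)

def characteristic_path_length(g):
    """Mean shortest-path length over all node pairs (BFS, unweighted,
    assumes a connected graph)."""
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

print(clustering_coefficient(graph), characteristic_path_length(graph))
```

A small-world network combines high clustering (like a lattice) with a short characteristic path length (like a random graph), which is what the fetal connectome analysis above reports.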
TLP: Towards three-level loop parallelisation
Mahjoub, Shabnam; Golsorkhtabaramiri, Mehdi; Salehi Amiri, Seyed Sadegh
IET Computers & Digital Techniques, Volume 16, Issue 5-6, September-November 2022.
Journal article; peer reviewed; open access.
Due to the design of computer systems in multi-core and/or multi-processor form, it is possible to use the maximum capacity of the processors to run an application in the least time through parallelisation. This is the responsibility of parallel compilers, which perform parallelisation in several steps, distributing loop iterations between different processors and executing them simultaneously to achieve a lower runtime. The present paper focuses on the uniformisation of three-level perfect nested loops as an important step in parallelisation and proposes a method called Towards Three-Level Loop Parallelisation (TLP), which uses a combination of a frog leaping algorithm and fuzzy logic to achieve optimal results; the three-level case matters because, in recent years, many algorithms have operated on volumetric data, that is, three-dimensional spaces. Results of implementing the TLP algorithm, in comparison with existing methods, show a wide variety of optimal results in the desired times, with the minimum cone size resulting from the vectors. Besides, the maximum number of input dependence vectors is decomposed by this algorithm. These results can accelerate the process of generating parallel codes and facilitate their development for High-Performance Computing purposes.
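As background to the parallelisation step described above, a classic legality test for executing loop iterations in parallel wavefronts is that the schedule vector must strictly respect every dependence vector of the nest. The dependence vectors below are hypothetical examples, not taken from the paper:

```python
import numpy as np

def legal_wavefront(schedule, dependence_vectors):
    """A wavefront schedule vector is legal iff every dependence vector
    has a strictly positive dot product with it: the source iteration of
    each dependence then executes on an earlier hyperplane than its sink,
    so all iterations on one hyperplane can run in parallel."""
    return all(np.dot(schedule, d) > 0 for d in dependence_vectors)

# Hypothetical dependence vectors of a three-level nested loop
deps = [np.array([1, 0, 0]), np.array([0, 1, -1]), np.array([1, -1, 2])]
print(legal_wavefront(np.array([1, 1, 1]), deps))  # False: (0,1,-1).(1,1,1)=0
print(legal_wavefront(np.array([3, 2, 1]), deps))  # True: all products > 0
```

Narrowing the cone spanned by the dependence vectors, which is what DCS minimization targets, enlarges the set of schedule vectors that pass this test.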
Fluid bed granulation involves a high level of complexity due to the simultaneous occurrence of agglomeration, breakage, and drying. These complexities should be thoroughly investigated through particle–particle, particle–droplet, and particle–fluid interactions to better understand the process. The present contribution focuses on the importance of drying and the associated challenges when modeling a granulation process. To do so, we first present a summary of the numerical approaches, from micro-scale to macro-scale, used for the simulation of drying and agglomeration in fluid bed granulators. Depending on the modeled scale, each approach features several advantages and challenges. We classified the imposed challenges based on their contributions to the drying rate and then critically scrutinized how these challenges have been addressed in the literature. Our review identifies some of the main challenges related to (i) the interaction of droplets with particles; (ii) the drying kinetics of granules and its dependence on agglomeration/breakage processes; and (iii) the determination of drying rates. Concerning the latter, the surface area available for drying specifically needs to be differentiated based on the state of the liquid in the granule: we propose to do this in the form of surface liquid, pore liquid, and liquid bridging the primary particles.
The present study proposes a novel method based on evolutionary and fuzzy approaches for unifying two-level perfect nested loops. In this method, the Shuffled Frog Leaping Algorithm (SFLA) is used to achieve optimal answers, and, simultaneously, three critical factors are applied as input in determining the basic dependence vectors. The use of fuzzy logic instead of fixed coefficients for these three factors creates optimal results with high variability and solves the problem regarding the existence of the main vectors. In addition, the algorithm is designed for large amounts of input data, so that it can be used in parallel compilers automatically and with low complexity. After implementing and evaluating the proposed method, we found that, compared with other existing methods, the results achieved were very close to optimal, obtained in the least time, and with the lowest Dependence Cone Size (DCS) and the highest number of input vectors.
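The Dependence Cone Size (DCS) objective mentioned above can be illustrated for the two-level (2-D) case as the angular spread of the dependence vectors. This sketch assumes lexicographically positive vectors with a positive first component, and the example vectors are hypothetical:

```python
import math

def dependence_cone_size(vectors):
    """Angle (degrees) of the smallest cone containing all 2-D dependence
    vectors: the spread between the minimum and maximum vector angles.
    Assumes every vector lies in the right half-plane (x > 0)."""
    angles = [math.atan2(y, x) for x, y in vectors]
    return math.degrees(max(angles) - min(angles))

# Hypothetical basic dependence vectors of a two-level loop nest
print(dependence_cone_size([(1, 0), (1, 1), (2, 1)]))  # 45.0
```

A smaller cone means the dependences are more nearly uniform, leaving more freedom for a legal parallel schedule, which is why DCS is minimized.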
Neuroimaging is crucial for assessing mass effect in brain-injured patients. Transport to an imaging suite, however, is challenging for critically ill patients. We evaluated the use of a low-magnetic-field, portable MRI (pMRI) for assessing midline shift (MLS). In this observational study, 0.064 T pMRI exams were performed on stroke patients admitted to the neuroscience intensive care unit at Yale New Haven Hospital. Dichotomous (present or absent) and continuous MLS measurements were obtained on pMRI exams and on locally available standard-of-care imaging exams (CT or MRI). We evaluated the agreement between pMRI and standard-of-care measurements. Additionally, we assessed the relationship between pMRI-based MLS and functional outcome (modified Rankin Scale). A total of 102 patients were included in the final study (48 ischemic stroke; 54 intracranial hemorrhage). There was significant concordance between pMRI and standard-of-care measurements (dichotomous, κ = 0.87; continuous, ICC = 0.94). Low-field pMRI identified MLS with a sensitivity of 0.93 and a specificity of 0.96. Moreover, pMRI MLS assessments predicted poor clinical outcome at discharge (dichotomous: adjusted OR 7.98, 95% CI 2.07-40.04, p = 0.005; continuous: adjusted OR 1.59, 95% CI 1.11-2.49, p = 0.021). Low-field pMRI may serve as a valuable bedside tool for detecting mass effect.
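The dichotomous agreement above is reported as Cohen's kappa (κ = 0.87), which corrects raw agreement for agreement expected by chance. A minimal illustration with hypothetical ratings (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (1 = midline shift present):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    p_a1 = sum(a) / n  # rate at which rater a says "present"
    p_b1 = sum(b) / n  # rate at which rater b says "present"
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical pMRI vs. standard-of-care MLS ratings for 10 patients
pmri = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
soc  = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print(cohens_kappa(pmri, soc))  # ~0.78: raw agreement 0.9 corrected for chance
```

Because "shift absent" dominates in such cohorts, raw percent agreement overstates concordance; kappa discounts the agreement two independent raters would reach by chance alone.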
Brain-computer interfaces (BCIs) offer individuals with disabilities an alternative channel of communication and control, and hence have been receiving increasing interest. BCIs can also be useful for healthy individuals in situations limiting their movement or where other computer interaction modalities need to be supplemented. Event-related potentials and steady-state visually evoked potentials (SSVEPs) are the two brain signal types most commonly used in developing BCIs that allow the user to make a choice from a discrete set of options, including selecting commands from a menu for a robot or computer to perform, as well as typing letters, symbols, or icons for communication. Popular BCI speller paradigms, such as the P300 Matrix Speller, RSVP Keyboard™, or SSVEP spellers in which the letters on the keyboard display flicker, are sensitive to font, size, and presentation speed. In addition, sensitivity to eye gaze control plays a significant role in the usability of most of these keyboards. We present a code-VEP-based BCI utilized in a language-model-assisted keyboard application. Using a cursor-based selection method, stimuli and targets are separated: FlashType™ separates visual stimulation from alphabet presentation to achieve performance invariance under presentation variations. FlashType™ can therefore be used for all languages, including those containing symbols and icons. FlashType™ contains a Static Keyboard, a row of Suggested Characters, and a row of Predicted Words. By default, FlashType™ uses only one EEG electrode and four stimuli. The system can also operate with only one stimulus at a lower selection rate, which is useful for individuals with limited or no gaze control; this feature is to be explored in future work. Replacing letters with text or icons representing commands would allow controlling a computer or robot. In this study, FlashType™ has been evaluated by three individuals performing 10 Mastery tasks.
In-depth experimentation, such as assessing the system with potential end users writing long passages of text, will be conducted in future work.
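The language-model assistance described above (the Suggested Characters row) can be illustrated with a toy character bigram model; the corpus and the top-k parameter here are hypothetical stand-ins for the system's actual language model:

```python
from collections import Counter, defaultdict

# Toy training text for the bigram model (hypothetical)
corpus = "the quick brown fox jumps over the lazy dog the thin throne"

# Count, for each character, which characters follow it
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def suggest(prev_char, k=3):
    """Top-k most likely next characters after prev_char, the kind of
    ranking that could populate a Suggested Characters row."""
    return [c for c, _ in counts[prev_char].most_common(k)]

print(suggest("t"))  # in this corpus 't' is always followed by 'h'
```

A deployed speller would use a far stronger model over words and longer contexts; the point here is only that ranking likely continuations reduces the number of selections the user must make.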