Time-resolved phase-contrast magnetic resonance imaging (4D-PCMR, also called 4D Flow MRI), while capable of non-invasively measuring blood velocities, can be affected by acquisition noise, flow artifacts, and resolution limits. In this paper, we present a novel method for merging 4D Flow MRI with computational fluid dynamics (CFD) to address these limitations and to reconstruct de-noised, divergence-free, high-resolution flow fields. Proper orthogonal decomposition (POD) is used to construct an orthonormal basis of the local sampling of the space of all possible solutions to the flow equations, both at the low resolution of the 4D Flow MRI grid and at the high resolution of the CFD mesh. Low-resolution, de-noised flow is obtained by projecting in vivo 4D Flow MRI data onto the low-resolution basis vectors. Ridge regression is then used to reconstruct a high-resolution, de-noised, divergence-free solution. The effects of 4D Flow MRI grid resolution and noise levels on the resulting velocity fields are further investigated. A numerical phantom of the flow through a cerebral aneurysm was used to compare the results obtained with the POD method against those obtained with state-of-the-art de-noising methods. At the 4D Flow MRI grid resolution, the POD method preserved small flow structures better than the other methods while eliminating noise. Furthermore, the method successfully reconstructed details at the CFD mesh resolution that are not discernible at the 4D Flow MRI grid resolution. This method will improve the accuracy of clinically relevant flow-derived parameters, such as pressure gradients and wall shear stresses, computed from in vivo 4D Flow MRI data.
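As an illustration (not the authors' implementation), the projection and regression steps described above can be sketched in a few lines of numpy. All sizes and data here are synthetic stand-ins: random snapshots play the role of the CFD solutions, and a block-averaging matrix stands in for the MRI downsampling operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CFD snapshot matrix: each column is one flow-field realization
# (n_hi values on the CFD mesh, n_snap snapshots). Stand-in data only.
n_hi, n_lo, n_snap = 300, 60, 20
snapshots_hi = rng.standard_normal((n_hi, n_snap))

# Downsampling operator mapping the CFD mesh to the 4D Flow MRI grid
# (here a simple 5-point block average; the real operator models the
# MRI acquisition).
D = np.zeros((n_lo, n_hi))
for i in range(n_lo):
    D[i, i * 5:(i + 1) * 5] = 1.0 / 5.0
snapshots_lo = D @ snapshots_hi

# POD bases at both resolutions via thin SVD of the snapshot matrices.
Phi_hi, _, _ = np.linalg.svd(snapshots_hi, full_matrices=False)
Phi_lo, _, _ = np.linalg.svd(snapshots_lo, full_matrices=False)

# Noisy "in vivo" measurement: a snapshot-space field plus acquisition noise.
truth_hi = snapshots_hi @ rng.standard_normal(n_snap) / n_snap
measured = D @ truth_hi + 0.05 * rng.standard_normal(n_lo)

# De-noise by projecting onto the low-resolution POD basis.
coeffs_lo = Phi_lo.T @ measured
denoised_lo = Phi_lo @ coeffs_lo

# Ridge regression: find high-resolution POD coefficients whose
# downsampled reconstruction matches the de-noised measurement.
A = D @ Phi_hi
lam = 1e-2  # regularization strength (illustrative value)
coeffs_hi = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ denoised_lo)
recon_hi = Phi_hi @ coeffs_hi
print(recon_hi.shape)  # (300,)
```

Because the reconstruction is a linear combination of POD modes built from CFD solutions, any property shared by all snapshots (such as zero divergence) is inherited by the output.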
•Novel method based on physics-informed deep learning for super-resolution and denoising of 4D-Flow MRI.
•The method works directly on 4D-Flow MRI data and generates Computational Fluid Dynamics (CFD) simulation-quality results without the drawbacks of CFD simulation.
•The method does not require specification of vascular geometry or boundary conditions and can work on arbitrary regions of interest.
•Automatic differentiation is used to compute gradients of field quantities, so there is no truncation error as in CFD.
Background and Objective: Time resolved three-dimensional phase contrast magnetic resonance imaging (4D-Flow MRI) has been used to non-invasively measure blood velocities in the human vascular system. However, issues such as low spatio-temporal resolution, acquisition noise, velocity aliasing, and phase-offset artifacts have hampered its clinical application. In this research, we developed a purely data-driven method for super-resolution and denoising of 4D-Flow MRI.
Methods: The flow velocities, pressure, and the MRI image magnitude are modeled as a patient-specific deep neural network (DNN). For training, 4D-Flow MRI images in the complex Cartesian space are used to impose data fidelity. The physics of fluid flow is imposed through regularization. Novel loss-function terms are introduced to handle noise and super-resolution. The trained patient-specific DNN can then be sampled to generate noise-free, high-resolution flow images. The proposed method has been implemented using the TensorFlow DNN library, tested on numerical phantoms, and validated in vitro using high-resolution particle image velocimetry (PIV) and 4D-Flow MRI experiments on transparent models subjected to pulsatile flow conditions.
Results: For the numerical phantoms, we were able to increase spatial resolution by a factor of 100 and temporal resolution by a factor of 5 compared to the simulated 4D-Flow MRI, with an order-of-magnitude reduction in velocity normalized root-mean-square error (vNRMSE). The in-vitro validation tests with PIV as the reference show a similar improvement in spatio-temporal resolution. Although the vNRMSE is reduced by 50%, the method is unable to negate a systematic bias, with respect to the reference PIV, that is introduced by the 4D-Flow MRI measurement.
Conclusions: This work has demonstrated the feasibility of using the readily available machinery of deep learning to enhance 4D-Flow MRI using a purely data-driven method. Unlike current state-of-the-art methods, the proposed method is agnostic to geometry and boundary conditions and therefore eliminates the need for tedious tasks such as accurate image segmentation for geometry, image registration, and estimation of boundary flow conditions. Arbitrary regions of interest can be selected for processing. This work will lead to user-friendly analysis tools that will enable quantitative hemodynamic analysis of vascular diseases in a clinical setting.
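The results above are reported in terms of the velocity normalized root-mean-square error (vNRMSE). As a minimal worked illustration of that metric (the normalization by peak reference speed is an assumption; the abstract does not specify the exact convention, and normalizations by mean speed or VENC also appear in the literature):

```python
import numpy as np

def vnrmse(v_est, v_ref):
    """Velocity normalized RMSE between two velocity fields of shape (n, 3).

    Normalizing by the peak reference speed is an assumption; other
    conventions exist.
    """
    err = np.sqrt(np.mean(np.sum((v_est - v_ref) ** 2, axis=1)))
    return err / np.max(np.linalg.norm(v_ref, axis=1))

rng = np.random.default_rng(1)
v_ref = rng.standard_normal((1000, 3))              # reference field (e.g. PIV)
v_noisy = v_ref + 0.1 * rng.standard_normal((1000, 3))  # noisy measurement
print(round(vnrmse(v_noisy, v_ref), 3))
```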
•A brief overview is given of the different aspects of retinal Optical Coherence Tomography (OCT) image analysis.
•The problem of involuntary eye-motion artifacts during OCT acquisition is described in detail.
•A comprehensive literature review of hardware- and software-based techniques for eye-motion artifact reduction is provided.
•Detailed discussions of the effectiveness of the covered methods and directions for future research in this field are presented.
In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data, either offline or online, to create an anatomically correct representation of the retina. In software-based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image or by making prior assumptions about the nature of the eye motion. Careful investigation of the most common methods in the field provides invaluable insight into future directions of research in this area. The challenge in hardware-based techniques lies in the implementation aspects of the particular devices. However, the results of these techniques are superior to those of software-based techniques because they capture secondary data related to eye motion during OCT acquisition. Software-based techniques, on the other hand, achieve moderate success, and their performance is highly dependent on the quality of the OCT data, in terms of the amount of motion artifacts contained in it. They remain relevant to the field, however, since they are the sole class of techniques that can be applied to legacy data acquired on systems without extra hardware to track eye motion.
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactants is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required to execute a single run and, due to the stochastic nature of the simulation, the need for multiple runs in parameter-sweep exercises. Even very efficient variants of the GSSA are prohibitively expensive for parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration using graphics processing units (GPUs). We parallelize the execution of a single realization across the threads in a warp (fine-grained parallelism), where a warp is a collection of threads executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
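For reference, the exact (direct-method) GSSA that the GPU variant accelerates can be sketched in a few lines of serial Python. The decay example at the bottom is a hypothetical illustration, not one of the benchmarked models.

```python
import random

def gillespie_ssa(x, stoich, rate_fn, t_end, seed=0):
    """Exact Gillespie direct method: simulate one trajectory until t_end.

    x        -- list of species counts
    stoich   -- per-reaction state-change vectors
    rate_fn  -- function mapping state -> list of propensities
    """
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_end:
        a = rate_fn(x)
        a0 = sum(a)
        if a0 == 0.0:                 # no reaction can fire; system is frozen
            break
        t += rng.expovariate(a0)      # exponentially distributed waiting time
        r, u = 0, rng.random() * a0   # choose which reaction fires,
        acc = a[0]                    # proportionally to its propensity
        while acc < u:
            r += 1
            acc += a[r]
        x = [xi + si for xi, si in zip(x, stoich[r])]
        traj.append((t, list(x)))     # note: the last event may overshoot t_end
    return traj

# Hypothetical example: simple decay A -> 0 with propensity c * A.
traj = gillespie_ssa(
    x=[100],
    stoich=[[-1]],
    rate_fn=lambda x: [0.5 * x[0]],
    t_end=10.0,
)
print(traj[-1])  # final (time, state)
```

In the paper's variant, many such realizations run concurrently (one per warp), with the propensity evaluation and reaction selection inside each realization spread across the warp's threads.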
Multi-modal image registration is the primary step in integrating information stored in two or more images captured using multiple imaging modalities. In addition to intensity variations and structural differences between the images, they may have partial or full overlap, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that facilitates the direct application of well-founded mono-modal registration methods in order to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation facilitates recovering strong scales, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation, the effectiveness of the proposed method is examined and compared with widely used information theory-based techniques using simulated and clinical human brain images with full data. On the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-MRIs, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering partially overlapped multi-modal images.
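The information theory-based techniques used as the comparison baseline typically score an alignment by the mutual information between the two images. A minimal sketch of that similarity measure, computed from a joint intensity histogram (a generic textbook formulation, not the compared implementations):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information I(A;B) between two equally sized images,
    estimated from their joint intensity histogram (natural log, nats)."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = h / h.sum()                       # joint probability table
    px = p.sum(axis=1, keepdims=True)     # marginal of A
    py = p.sum(axis=0, keepdims=True)     # marginal of B
    nz = p > 0                            # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
img_a = rng.random((64, 64))
img_b = rng.random((64, 64))              # independent of img_a
mi_self = mutual_information(img_a, img_a)
mi_indep = mutual_information(img_a, img_b)
print(round(mi_self, 2), round(mi_indep, 2))
```

A registration driven by this score maximizes it over candidate transformations; the proposed transformation instead maps both images into a common mono-modal representation so that simpler similarity measures apply.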
The Scanning Electron Microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has received extensive attention since its emergence. However, the acquired micrographs remain two-dimensional (2D). In the current work, a novel and highly accurate approach is proposed to recover the hidden third dimension by using multi-view image acquisition of the microscopic samples combined with pre- and post-processing steps, including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching, and, finally, depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved, facilitating the interpretation of the topology and geometry of the samples' surface and shape attributes. As a byproduct of the proposed approach, high-definition 3D-printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state of the art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.
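For the final depth-estimation step, the standard rectified-stereo relation converts the dense matches (disparities) into depth. This is the generic textbook relation, not necessarily the paper's exact SEM-specific formulation, and all numbers below are hypothetical:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline):
    """Rectified-stereo depth: Z = f * B / d, valid where disparity d > 0.

    disparity_px -- per-pixel disparity map (pixels)
    focal_px     -- focal length in pixels
    baseline     -- distance between the two viewpoints; output depth
                    carries the same units as the baseline
    """
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline / np.maximum(d, 1e-12), np.inf)

# Hypothetical numbers: 2000 px focal length, baseline of 10 units.
disparity = np.array([[50.0, 25.0],
                      [10.0,  0.0]])
depth = depth_from_disparity(disparity, focal_px=2000.0, baseline=10.0)
print(depth)  # zero disparity (no match / point at infinity) -> inf
```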
•Has proven applicability in recovering the flow profile in intra-cranial aneurysms.
•Results in lower error metrics compared to state-of-the-art denoising methods.
•Enables more accurate computation of flow-derived patho-physiological parameters.
4D-Flow MRI has emerged as a powerful tool for non-invasively imaging blood velocity profiles in the human cardiovascular system. However, it is plagued by issues such as velocity aliasing, phase offsets, acquisition noise, and low spatial and temporal resolution. When imaging small blood-vessel malformations such as intra-cranial aneurysms, the spatial resolution of 4D-Flow is often inadequate to resolve fine flow features. In this paper, we address the problems of low spatial resolution and noise by combining 4D-Flow MRI and patient-specific computational fluid dynamics using the Least Absolute Shrinkage and Selection Operator (LASSO). Extensive experiments using numerical phantoms of two actual intra-cranial aneurysm geometries show the applicability of the proposed method in recovering the flow profile. Comparisons with state-of-the-art denoising methods for 4D-Flow show lower error metrics. This method can enable more accurate computation of flow-derived patho-physiological parameters such as wall shear stresses, pressure gradients, and viscous dissipation.
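As an illustration of the regression core of such a LASSO-based fusion (not the authors' implementation): express the measured low-resolution flow in a dictionary of CFD-derived modes and solve the L1-regularized least-squares problem, here by iterative soft-thresholding (ISTA). All dimensions and data are synthetic stand-ins.

```python
import numpy as np

def lasso_ista(A, b, lam, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                  # gradient of the smooth part
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(2)
# Columns of A: candidate CFD flow modes sampled on the 4D-Flow grid (stand-ins).
A = rng.standard_normal((80, 30))
x_true = np.zeros(30)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]          # true flow uses only a few modes
b = A @ x_true + 0.05 * rng.standard_normal(80)  # noisy "4D-Flow" measurement

x_hat = lasso_ista(A, b, lam=2.0)
print(np.flatnonzero(np.abs(x_hat) > 0.1))     # indices of recovered active modes
```

The L1 penalty drives the coefficients of irrelevant modes exactly to zero, which is what makes LASSO attractive for selecting a small set of physically plausible CFD modes that explain the noisy measurement.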
In this paper, we describe a new brute-force algorithm for building the k-Nearest Neighbor Graph (k-NNG). The k-NNG has many applications in areas such as machine learning, bio-informatics, and clustering analysis. While there are very efficient algorithms for low-dimensional data, for high-dimensional data brute-force search remains the best algorithm. There are two main parts to the algorithm: the first is finding the distances between the input vectors, which may be formulated as a matrix multiplication problem; the second is the selection of the k nearest neighbors for each of the query vectors. For the second part, we describe a novel graphics processing unit (GPU)-based multi-select algorithm based on quicksort. Our optimization makes clever use of the warp voting functions available on the latest GPUs along with user-controlled cache. Benchmarks show significant improvement over state-of-the-art implementations of the k-NN search on GPUs.
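The two-part structure described above can be sketched in numpy (a serial reference, not the GPU implementation): the distance computation reduces to one matrix product via the expansion ||q - d||^2 = ||q||^2 - 2 q·d + ||d||^2, and a partial-selection routine plays the role of the GPU multi-select step.

```python
import numpy as np

def knn_bruteforce(queries, data, k):
    """Brute-force k-NN: all pairwise distances via one matrix
    multiplication, then k-selection per query."""
    q2 = np.sum(queries ** 2, axis=1, keepdims=True)   # ||q||^2, column vector
    d2 = np.sum(data ** 2, axis=1)                     # ||d||^2, row vector
    dist2 = q2 - 2.0 * queries @ data.T + d2           # squared distances
    # Partial selection of the k smallest per row; np.argpartition stands in
    # for the GPU quicksort-based multi-select.
    idx = np.argpartition(dist2, k, axis=1)[:, :k]
    order = np.argsort(np.take_along_axis(dist2, idx, axis=1), axis=1)
    return np.take_along_axis(idx, order, axis=1)      # sorted by distance

rng = np.random.default_rng(3)
data = rng.standard_normal((1000, 64))   # high-dimensional points
queries = data[:5]                       # query a few known points
nbrs = knn_bruteforce(queries, data, k=3)
print(nbrs[:, 0])  # each queried point's nearest neighbor is itself
```

Building the full k-NNG is the special case where every data point is also a query; the matrix product then dominates the cost, which is why it maps well onto GPUs.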