Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNNs) to learn features associated with motion artifacts within realistic anatomical content. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation, which favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy and can therefore promote images depicting unrealistic content. DL-VIF was integrated into an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT.
DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric, which is computed against a motion-free reference. It is generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies, in comparison to a conventional gradient entropy metric.
The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock architecture was more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvements over the uncompensated image of 0.068 and 0.034, respectively, referenced to motion-free images. DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM across motion patterns (σ = 0.008 versus σ = 0.019). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%) versus little improvement and even degradation in SSIM, respectively) and visually improved image quality even in severely motion-corrupted images.
Conclusion: These studies demonstrate the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.
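The autofocus principle described above amounts to searching over candidate motion parameters for the reconstruction that maximizes an image-quality metric. The following sketch illustrates this loop; a crude finite-difference sharpness proxy stands in for the learned DL-VIF network, and `reconstruct` is a hypothetical user-supplied hook mapping motion parameters to a reconstructed image (both names are assumptions, not from the paper).

```python
# Minimal autofocus sketch: search rigid-motion candidates for the one that
# maximizes an image-quality metric. A simple sharpness proxy stands in for
# the learned DL-VIF metric; reconstruct() is a hypothetical callback that
# reconstructs an image (here a flat list of values) for given parameters.

def sharpness(img):
    # mean squared finite difference: a crude, anatomy-unaware surrogate
    return sum((a - b) ** 2 for a, b in zip(img, img[1:])) / (len(img) - 1)

def autofocus(reconstruct, candidates):
    # candidates: iterable of motion-parameter values (or vectors)
    best_p, best_q = None, float("-inf")
    for p in candidates:
        q = sharpness(reconstruct(p))
        if q > best_q:
            best_p, best_q = p, q
    return best_p, best_q
```

In the paper's framework, the metric evaluation is a CNN forward pass and the search is a continuous optimization over rigid-motion trajectories rather than the exhaustive scan shown here.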
Purpose:
To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint.
Methods:
Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method based on 2D/3D registration of the left and right femur and tibia, segmented from a prior, motion-free reconstruction acquired in the supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in a standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of the individual bones is combined into global motion fields using thin-plate-spline extrapolation, which can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method, using gradient correlation (GC) and normalized gradient information (NGI) as the similarity measure for the 2D/3D registration.
Results:
The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality in the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking with the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction; 53.3 and 56.1 for the proposed method using GC and NGI, respectively; and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan.
Conclusions:
The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to that of the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which improves clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management.
In cone-beam CT, involuntary patient motion and inaccurate or irreproducible scanner motion substantially degrade image quality. To avoid artifacts, this motion needs to be estimated and compensated during image reconstruction. In previous work, we showed that Fourier consistency conditions (FCC) can be used in fan-beam CT to estimate motion in the sinogram domain. This work extends the FCC to 3D cone-beam CT. We derive an efficient cost function to compensate for 3D motion using 2D detector translations. The extended FCC method has been tested with five translational motion patterns, using a challenging numerical phantom. We evaluated the root-mean-square error and the structural similarity index between motion-corrected and motion-free reconstructions. Additionally, we computed the mean absolute difference (MAD) between the estimated and the ground-truth motion. The practical applicability of the method is demonstrated by application to respiratory motion estimation in rotational angiography, as well as to motion correction for weight-bearing imaging of knees; the latter makes use of a specifically modified FCC version that is robust to axial truncation. The results show a great reduction of motion artifacts. Accurate estimation results were achieved with maximum MAD values of 708 μm and 1184 μm for motion along the vertical and horizontal detector directions, respectively. The image quality of reconstructions obtained with the proposed method is close to that of motion-corrected reconstructions based on the ground-truth motion. Simulations using noise-free and noisy data demonstrate that FCC are robust to noise. Even high-frequency motion was accurately estimated, leading to a considerable reduction of streaking artifacts. The method is purely image-based and therefore independent of any auxiliary data.
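The cost function in Fourier-consistency methods penalizes spectral energy where consistent data should have none (for a bounded object, the sinogram spectrum is confined to a double-wedge region). The toy below shows only that "energy outside the allowed support" idea on a 1D signal with a naive DFT; it is not the paper's 2D/3D formulation, and the forbidden-bin choice is an assumption for illustration.

```python
import cmath

# Toy Fourier-consistency cost: for consistent (motion-free) data certain
# spectral bins are (near) zero, so their energy can serve as a cost to
# minimize over motion parameters. Real FCC operate on the 2D sinogram
# spectrum; this 1D sketch only conveys the principle.

def dft(x):
    # naive discrete Fourier transform (fine for tiny illustrative signals)
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fcc_cost(signal, forbidden_bins):
    # energy in spectral bins that should vanish for consistent data
    spec = dft(signal)
    return sum(abs(spec[k]) ** 2 for k in forbidden_bins)
```

In the motion-estimation setting, detector translations are adjusted until this cost is minimal, i.e. until the corrected data again satisfy the consistency conditions.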
Accurate and consistent mental interpretation of fluoroscopy to determine the position and orientation of acetabular bone fragments in 3D space is difficult. We propose a computer-assisted approach that uses a single fluoroscopic view and quickly reports the pose of an acetabular fragment without any user input or initialization. Intraoperatively, but prior to any osteotomies, two constellations of metallic ball-bearings (BBs) are injected into the wing of a patient's ilium and lateral superior pubic ramus. One constellation is located on the expected acetabular fragment, and the other is located on the remaining, larger, pelvis fragment. The 3D locations of each BB are reconstructed using three fluoroscopic views and 2D/3D registrations to a preoperative CT scan of the pelvis. The relative pose of the fragment is established by estimating the movement of the two BB constellations using a single fluoroscopic view taken after osteotomy and fragment relocation. BB detection and inter-view correspondences are automatically computed throughout the processing pipeline. The proposed method was evaluated on a multitude of fluoroscopic images collected from six cadaveric surgeries performed bilaterally on three specimens. Mean fragment rotation error was 2.4 ± 1.0 degrees, mean translation error was 2.1 ± 0.6 mm, and mean 3D lateral center edge angle error was 1.0 ± 0.5 degrees. The average runtime of the single-view pose estimation was 0.7 ± 0.2 s. The proposed method demonstrates accuracy similar to that of other state-of-the-art systems, which require optical tracking systems or multiple-view 2D/3D registrations with manual input. The errors reported on fragment poses and lateral center edge angles are within the margins required for accurate intraoperative evaluation of femoral head coverage.
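The core geometric subproblem here is recovering a fragment's rigid motion from matched BB positions before and after relocation. As a simplified illustration (not the paper's single-view 2D/3D pipeline), the sketch below solves the 2D version with a closed-form Procrustes fit; the function name and 2D restriction are assumptions for brevity.

```python
import math

# Sketch: recover a rigid transform (rotation angle + translation) from
# matched 2D point constellations, e.g. BB positions before/after fragment
# relocation. The actual system solves the 3D problem from one fluoroscopic
# view; this closed-form 2D Procrustes fit only illustrates the principle.

def rigid_fit_2d(src, dst):
    # centroids of the two constellations
    cs = [sum(p[i] for p in src) / len(src) for i in (0, 1)]
    cd = [sum(p[i] for p in dst) / len(dst) for i in (0, 1)]
    sxx = sxy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay = ax - cs[0], ay - cs[1]     # center source point
        bx, by = bx - cd[0], by - cd[1]     # center destination point
        sxx += ax * bx + ay * by            # cosine accumulator
        sxy += ax * by - ay * bx            # sine accumulator
    theta = math.atan2(sxy, sxx)            # least-squares rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])    # translation after rotation
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)
```

Applying the fit separately to the fragment and pelvis constellations yields the fragment pose relative to the intact pelvis, analogous to how the paper reports relative fragment motion.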
In radiation therapy, tumor tracking allows the beam to be adjusted so that it follows the respiration-induced tumor motion. However, most clinical approaches rely on implanted fiducial markers to locate the tumor and, thus, only provide sparse information. Motion models have been investigated to estimate dense internal displacement fields from an external surrogate signal, such as range imaging. To cope with high-dimensional surrogate domains of increasing complexity, we propose a respiratory motion estimation framework based on kernel ridge regression. This approach was validated on five patient datasets, consisting of a planning 4DCT and a follow-up 4DCT for each patient. Mean residual error was at best 2.73 ± 0.25 mm, but varied greatly between patients.
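Kernel ridge regression, the regression engine named above, fits coefficients α = (K + λI)⁻¹y for a kernel matrix K and predicts via kernel evaluations against the training surrogates. The sketch below is a minimal scalar-valued, Gaussian-kernel version with a tiny dense solver; the real framework maps high-dimensional surrogate signals to vector-valued displacement fields, and all parameter choices here (σ, λ) are illustrative assumptions.

```python
import math

# Minimal kernel ridge regression: surrogate value -> internal displacement.
# Gaussian kernel; tiny Gaussian-elimination solver keeps it self-contained.

def gauss_kernel(a, b, sigma=1.0):
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def solve(A, y):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def krr_fit(xs, ys, lam=1e-3):
    # alpha = (K + lambda*I)^-1 y
    n = len(xs)
    K = [[gauss_kernel(xs[i], xs[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(xs, alpha, x):
    return sum(a * gauss_kernel(xi, x) for a, xi in zip(alpha, xs))
```

The ridge term λ trades fitting accuracy for robustness to surrogate noise, which matters when the model is trained on one 4DCT and applied to a follow-up acquisition.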
Brain tissue perfusion assessment without using a CT system. Subramanian, S.; Lee, D.; Faria, A. V.; ...
2023 IEEE Nuclear Science Symposium, Medical Imaging Conference and International Symposium on Room-Temperature Semiconductor Detectors (NSS MIC RTSD), 2023-Nov-4.
Conference Proceeding
Real-time perfusion imaging is highly desirable for effectively using endovascular thrombectomy in treating ischemic stroke; the current standard of care is hindered by the lack of intra-operative perfusion imaging. We propose an Intra-intervention perfusion with no C-arm rotation (IPEN)-based method using X-ray angiographic (XA) images in the interventional suite. This approach aims to provide a volumetric brain perfusion assessment. IPEN is built on tomographic theory: if an object consists of homogeneous regions of interest (ROIs) with known shapes, then N x-ray beams in a single projection are sufficient to estimate the values of the N ROIs. IPEN for brain perfusion assessment aims to estimate the dynamic time-enhancement curve (TEC) of each homogeneous ROI in the brain. The proposed algorithm was evaluated using XCAT phantoms updated for dynamic CT perfusion. Eight ischemic stroke cases were simulated. Using intra-arterial contrast-enhanced bi-plane XA images for each case, IPEN was performed to estimate the TEC values. Quantitatively, the estimated TECs were in excellent agreement with the true TECs. The normalized weighted root mean square error was 14.5% ± 1.7%, and the linear correlation coefficient values (R) ranged from 0.85 to 0.95 (P < 0.01) for the 8 cases.
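The tomographic principle IPEN rests on is linear: each ray measurement is a sum of known path lengths through the ROIs times the unknown ROI values, so N independent rays determine N values. The sketch below shows this with two ROIs and a closed-form 2×2 solve; function names and numbers are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the IPEN principle: with N homogeneous ROIs of known shape,
# N ray measurements in a single projection determine the N ROI values.
# Each measurement is p_i = sum_j L[i][j] * mu[j], where L[i][j] is the
# known path length of ray i through ROI j. A 2x2 closed-form solve keeps
# the sketch self-contained.

def forward(L, mu):
    # line-integral measurement model for the rays
    return [sum(l * m for l, m in zip(row, mu)) for row in L]

def ipen_solve_2(L, p):
    # invert the 2x2 path-length matrix to recover the two ROI values
    (a, b), (c, d) = L
    det = a * d - b * c
    assert abs(det) > 1e-12, "rays must probe the ROIs independently"
    return [(d * p[0] - b * p[1]) / det,
            (-c * p[0] + a * p[1]) / det]
```

Repeating this solve for every time frame of the contrast-enhanced XA sequence yields a per-ROI time-enhancement curve, which is the quantity IPEN estimates.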
Non-recurrent intra-scan motion, such as respiration, corrupts rotational coronary angiography acquisitions and inhibits uncompensated 3D reconstruction. State-of-the-art algorithms that rely on 3D/2D registration of initial reconstructions to the projection data are therefore unfavorable, as prior models of sufficient quality cannot be obtained. To overcome this limitation, we propose a compensation method that optimizes a task-based autofocus measure using graphical model-based optimization.