We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3-D+time CT data.
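As a minimal illustration of the hyperspherical-coordinate idea (a sketch, not the authors' implementation; the function name is hypothetical), the following converts an n-dimensional gradient vector into a magnitude and n-1 angles, which n-SIFT-style features would then quantize into a multidimensional orientation histogram:

```python
import math

def hyperspherical(grad):
    """Convert an n-D gradient vector into (magnitude, n-1 angles)
    using the standard hyperspherical convention."""
    r = math.sqrt(sum(g * g for g in grad))
    angles = []
    # phi_k in [0, pi]: angle between axis k and the remaining "tail"
    for k in range(len(grad) - 2):
        tail = math.sqrt(sum(g * g for g in grad[k + 1:]))
        angles.append(math.atan2(tail, grad[k]))
    # final angle in (-pi, pi], in the plane of the last two axes
    angles.append(math.atan2(grad[-1], grad[-2]))
    return r, angles
```

Each angle can then be binned into a fixed number of intervals, so the feature vector generalizes the 2-D SIFT orientation histogram to any dimensionality.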
Optical coherence tomography (OCT) is a noninvasive, depth-resolved imaging modality that has become a prominent ophthalmic diagnostic technique. We present a semi-automated segmentation algorithm to detect intra-retinal layers in OCT images acquired from rodent models of retinal degeneration. We adapt Chan-Vese's energy-minimizing active contours without edges for the OCT images, which suffer from low contrast and are highly corrupted by noise. A multiphase framework with a circular shape prior is adopted in order to model the boundaries of retinal layers and estimate the shape parameters using least squares. We use a contextual scheme to balance the weight of different terms in the energy functional. The results from various synthetic experiments and segmentation results on OCT images of rats are presented, demonstrating the strength of our method to detect the desired retinal layers with sufficient accuracy even in the presence of intensity inhomogeneity resulting from blood vessels. Our algorithm achieved an average Dice similarity coefficient of 0.84 over all segmented retinal layers, and of 0.94 for the combined nerve fiber layer, ganglion cell layer, and inner plexiform layer which are the critical layers for glaucomatous degeneration.
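For intuition, the region terms of the basic two-phase Chan-Vese energy can be sketched as follows (a simplified illustration only; the paper uses a multiphase formulation with a circular shape prior and contextual term weights, which are omitted here):

```python
import numpy as np

def chan_vese_energy(img, mask, lam1=1.0, lam2=1.0):
    """Region terms of the two-phase Chan-Vese 'active contours without
    edges' energy: fit each region by its mean intensity and penalize
    the squared deviation.  (Length and shape-prior terms omitted.)"""
    inside = img[mask]
    outside = img[~mask]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    return lam1 * ((inside - c1) ** 2).sum() + lam2 * ((outside - c2) ** 2).sum()
```

Minimizing this energy over the partition drives the contour toward boundaries between regions of homogeneous intensity, which is why it copes with low-contrast, noisy images better than purely edge-based models.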
•Visual phenotypes of cannabis trichomes are characterized in the context of trichome maturation during flower development for the first time.
•Trichome gland color may describe the optimal time to harvest.
•Trichome phenotypes are strain-agnostic.
•Trichome morphology metrics describe flower tissue development and trichome stalk elongation.
•Trichome gland phenotypes are differentiated by trichome gland fluorescence.
Cannabis (Cannabis sativa L.) is cultivated by licensed producers in Canada for medicinal and recreational uses. The recent legalization of this plant in 2018 has resulted in rapid expansion of the industry, with greenhouse production representing the most common method of cultivation. Female cannabis plants produce inflorescences that contain bracts densely covered by glandular trichomes, which synthesize a range of commercially important cannabinoids (e.g., THC, CBD) as well as terpenes. Cannabinoid content and quality vary over the 8-week flowering period to such an extent that the time of harvest can significantly impact product quality. Cannabis flower maturation is accompanied by a transition in the color of trichome heads that progresses from clear to milky to brown (amber) and can be seen visually using low magnification. However, the importance of this transition as it impacts quality and describes maturity has never been investigated. To establish a relationship between trichome maturation and trichome head color changes (phenotype), we developed a novel automatic trichome gland analysis pipeline using deep learning. We first collected a macro-photography dataset based on 4 commercially grown cannabis strains, namely 'Afghan Kush', 'Green Death Bubba', 'Pink Kush', and 'White Rhino'. Images were obtained in two modalities: conventional macroscopic light photography and macroscopic UV-induced fluorescence. We then implemented a pipeline where the clear-milky-brown heuristic was injected into the algorithm to quantify trichome phenotype progression during the 8-week flowering period. A series of clear, milky, and brown phenotype curves were recorded for each strain over the flowering period that were validated as indicators of trichome maturation and corresponded to previously described parameters of trichome development, such as trichome gland head diameter and stalk elongation.
We also derived morphological metrics describing trichome gland geometry from deep learning segmentation predictions that profiled trichome maturation over the flowering period. We observed that mature and senescing trichomes displayed fluorescent properties that were reflected in the clear, milky, and brown phenotypes. Our method was validated by two experiments where factors affecting trichome quality and flower development were imposed and the effects were then quantified using the deep learning pipeline. Our results indicate the feasibility of automated trichome analysis as a method to evaluate the maturation of female flowers cultivated in a highly variable environment, regardless of strain. These findings have broad applicability in a growing industry in which cannabis flower quality is receiving increased scrutiny for medicinal and recreational uses.
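Once per-trichome phenotype labels are available (from the deep learning pipeline in the abstracts above), turning them into the clear/milky/brown phenotype curves is a simple aggregation. The sketch below assumes hypothetical input of per-week label lists; the function name is illustrative, not from the paper:

```python
from collections import Counter

def phenotype_curves(weekly_labels):
    """Aggregate per-trichome labels ('clear'/'milky'/'brown') for each
    flowering week into phenotype fractions, i.e. the curves tracked
    over the 8-week flowering period."""
    curves = {}
    for week, labels in weekly_labels.items():
        counts = Counter(labels)
        total = sum(counts.values())
        curves[week] = {k: counts[k] / total for k in ("clear", "milky", "brown")}
    return curves
```

Plotting each fraction against week then yields the strain-level maturation curves described above.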
Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
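One generic way to propagate registration uncertainty into a dosimetric quantity is Monte Carlo sampling of the displacement field. The one-dimensional sketch below is an illustration of that general idea only, not the paper's algorithm; all names and the Gaussian displacement model are assumptions:

```python
import random

def accumulated_dose_stats(dose, disp_mean, disp_sigma, x, n=4000, seed=0):
    """Propagate a Gaussian uncertainty on the registration displacement
    into the dose mapped to point x: sample displacements, evaluate the
    dose at each displaced position, and return (mean, std).
    `dose` is a callable giving dose at a 1-D spatial position."""
    rng = random.Random(seed)
    samples = [dose(x + rng.gauss(disp_mean, disp_sigma)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5
```

The per-point standard deviation is what could then drive uncertainty-aware visualizations or be encoded into transferred segmentation labels.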
We extend the well-known scalar image bilateral filtering technique to diffusion tensor magnetic resonance images (DTMRI). The scalar version of bilateral image filtering is extended to perform edge-preserving smoothing of DT field data. The bilateral DT filtering is performed in the log-Euclidean framework which guarantees valid output tensors. Smoothing is achieved by weighted averaging of neighboring tensors. Analogous to bilateral filtering of scalar images, the weights are chosen to be inversely proportional to two distance measures: the geometrical Euclidean distance between the spatial locations of tensors and the dissimilarity of tensors. We describe the noniterative DT smoothing equation in closed form and show how interpolation of DT data is treated as a special case of bilateral filtering where only spatial distance is used. We evaluate different DT tensor dissimilarity metrics including the log-Euclidean, the similarity-invariant log-Euclidean, the square root of the J-divergence, and the distance scaled mutual diffusion coefficient. We present qualitative and quantitative smoothing and interpolation results and show their effect on segmentation, for both synthetic DT field data, as well as real cardiac and brain DTMRI data.
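A minimal sketch of the scheme, assuming Gaussian weight kernels and the log-Euclidean dissimilarity (one of the metrics evaluated above; the kernel choice and function names are assumptions): map each SPD tensor to its matrix logarithm, take a bilateral weighted average in log space, and map back with the matrix exponential, which guarantees an SPD result.

```python
import numpy as np

def log_euclidean_bilateral(tensors, coords, sigma_s, sigma_t):
    """Bilateral filtering of SPD tensors in the log-Euclidean framework.
    Weights combine spatial distance and log-Euclidean tensor dissimilarity."""
    def logm(T):                      # matrix log via eigendecomposition
        w, V = np.linalg.eigh(T)
        return V @ np.diag(np.log(w)) @ V.T
    def expm(L):                      # matrix exp via eigendecomposition
        w, V = np.linalg.eigh(L)
        return V @ np.diag(np.exp(w)) @ V.T
    logs = [logm(T) for T in tensors]
    out = []
    for Li, xi in zip(logs, coords):
        num, den = np.zeros_like(Li), 0.0
        for Lj, xj in zip(logs, coords):
            ds = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
            dt = np.linalg.norm(Li - Lj)          # log-Euclidean dissimilarity
            w = np.exp(-ds**2 / (2 * sigma_s**2)) * np.exp(-dt**2 / (2 * sigma_t**2))
            num += w * Lj
            den += w
        out.append(expm(num / den))   # exp-map back: result is SPD
    return out
```

Setting the tensor-dissimilarity kernel to a constant recovers pure spatially weighted averaging, which is the interpolation special case mentioned above.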
A method for visualizing manifold-valued medical image data is proposed. The method operates on images in which each pixel is assumed to be sampled from an underlying manifold. For example, each ...pixel may contain a high dimensional vector, such as the time activity curve (TAC) in a dynamic positron emission tomography (dPET) or a dynamic single photon emission computed tomography (dSPECT) image, or the positive semi-definite tensor in a diffusion tensor magnetic resonance image (DTMRI). A nonlinear mapping reduces the dimensionality of the pixel data to achieve two goals: distance preservation and embedding into a perceptual color space. We use multidimensional scaling distance-preserving mapping to render similar pixels (e.g., DT or TAC pixels) with perceptually similar colors. The 3D CIELAB perceptual color space is adopted as the range of the distance preserving mapping, with a final similarity transform mapping colors to a maximum gamut size. Similarity between pixels is either determined analytically as geodesics on the manifold of pixels or is approximated using manifold learning techniques. In particular, dissimilarity between DTMRI pixels is evaluated via a Log-Euclidean Riemannian metric respecting the manifold of the rank 3, second-order positive semi-definite DTs, whereas the dissimilarity between TACs is approximated via ISOMAP. We demonstrate our approach via artificial high-dimensional, manifold-valued data, as well as case studies of normal and pathological clinical brain and heart DTMRI, dPET, and dSPECT images. Our results demonstrate the effectiveness of our approach in capturing, in a perceptually meaningful way, important features in the data.
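The distance-preserving dimensionality reduction at the core of this approach can be illustrated with classical multidimensional scaling (a standard algorithm, shown here as a generic sketch; the paper's full pipeline additionally uses manifold-derived dissimilarities and a similarity transform into the CIELAB gamut):

```python
import numpy as np

def classical_mds(D, dim=3):
    """Classical multidimensional scaling: embed points so pairwise
    Euclidean distances approximate the dissimilarity matrix D.
    A 3-D embedding can then be affinely mapped into a perceptual
    color space such as CIELAB."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:dim]          # keep the largest `dim`
    scale = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * scale
```

Because nearby embedding coordinates become nearby colors after the map into CIELAB, pixels with similar tensors or time activity curves are rendered in perceptually similar colors.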
The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. Methods varied in complexity from a 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and then the same transformation was applied to the SPECT images. According to registration evaluation measures computed based on the intensity difference between the registered CT images or based on target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to the irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were the largest for the diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration for radiotherapy treatment planning. It was also found that proper evaluation of image registration requires the use of several accuracy metrics.
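Of the evaluation measures mentioned above, target registration error (TRE) is the simplest to state: the mean Euclidean distance between corresponding landmarks after the registration transform has been applied. A minimal sketch (the function name is ours, not the paper's):

```python
import math

def target_registration_error(fixed_pts, moved_pts):
    """Mean Euclidean distance between corresponding landmark pairs
    after registration (target registration error, TRE)."""
    dists = [math.dist(p, q) for p, q in zip(fixed_pts, moved_pts)]
    return sum(dists) / len(dists)
```

Pairing TRE with intensity-difference and histogram-based measures is exactly the multi-metric validation the conclusion above calls for.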
We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optimum, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
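The GA machinery itself is generic: maintain a population of parameter vectors (here they would encode the localized medial deformations), select the fittest, and produce children by crossover and mutation. The sketch below is a textbook GA on an abstract fitness function, not the authors' medial-shape parameterization; all names and operator choices are assumptions:

```python
import random

def evolve(fitness, dim, pop=40, gens=60, sigma=0.3, seed=1):
    """Minimal genetic algorithm: truncation selection, blend crossover,
    Gaussian mutation.  `fitness` maps a parameter vector to a cost
    (lower is better, e.g. a deformable-model energy)."""
    rng = random.Random(seed)
    P = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 4]                     # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)           # pick two elite parents
            children.append(
                [(x + y) / 2 + rng.gauss(0, sigma) for x, y in zip(a, b)]
            )
        P = elite + children
    return min(P, key=fitness)
```

Evolving many models at once is what gives the method its robustness to poor initialization and local minima, at the cost of the global-optimality guarantee noted above.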
Purpose:
To quantitatively evaluate the accuracy of several SPECT/CT image registration methods in recent studies and its impact on the functional lung volume segmentation in SPECT guided radiation therapy (RT) treatment planning.
Methods and Materials:
Five lung cancer patients consented to have a perfusion SPECT scan with 99mTc-macroaggregated albumin. During the scan, a low resolution CT image was acquired using the SPECT/CT scanner. This CT scan was co-registered to the patient's planning CT scan through four rigid and deformable image registration programs: rigid registration, control-point-based registration with points placed on the skin, control-point-based registration with points placed on the lungs, and B-spline deformable registration. After the CT to CT co-registration, the original SPECT reconstructions were warped and co-registered to the planning CT scan. The functional lung volumes were segmented from each deformed SPECT using 10, 20, …, 90% of the maximum pixel value as a threshold. The differences in the size and contours of each functional volume were calculated.
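The threshold-based functional volume segmentation described above reduces to counting voxels above each fraction of the maximum value. A minimal sketch (function and parameter names are ours):

```python
import numpy as np

def functional_volumes(spect, voxel_volume=1.0):
    """Segment the functional lung volume at thresholds of 10%, 20%,
    ..., 90% of the maximum voxel value; return volume per threshold."""
    peak = spect.max()
    return {
        pct: (spect >= pct / 100.0 * peak).sum() * voxel_volume
        for pct in range(10, 100, 10)
    }
```

Comparing these per-threshold volumes (and the corresponding contours) across the warped SPECT images is how the differences between registration methods were quantified.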
Results:
Based on the evaluation of registered CT images, the result from B-spline registration demonstrated the smallest intensity difference. Using the warped SPECT images obtained from this registration method as a reference, the smallest difference in the size and contour of functional volumes was found using rigid registration. In the point-based registrations, a better result was found when the control points were placed on the lung volume instead of the body contour.
Conclusion:
Applying the B-spline based image registration method in SPECT-guided RT studies was shown to be accurate. Point-based image registration using skin markers with a standalone SPECT scanner was found to be the least accurate.