Objectives
Body tissue composition is a long-known biomarker with high diagnostic and prognostic value, not only in cardiovascular, oncological, and orthopedic diseases but also in rehabilitation medicine and drug dosing. In this study, the aim was to develop a fully automated, reproducible, and quantitative 3D volumetry of body tissue composition from standard abdominal CT examinations in order to offer such valuable biomarkers as part of routine clinical imaging.
Methods
An in-house dataset of 40 CTs for training and 10 CTs for testing was fully annotated on every fifth axial slice with five different semantic body regions: abdominal cavity, bones, muscle, subcutaneous tissue, and thoracic cavity. Multi-resolution U-Net 3D neural networks were employed to segment these body regions, followed by subclassification of adipose tissue and muscle using known Hounsfield unit (HU) limits.
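The HU-based subclassification step can be sketched as a simple thresholding of the CT volume inside each segmented region. This is an illustration only: the exact HU limits used in the study are not stated here, so the ranges below (−190 to −30 HU for adipose tissue, −29 to 150 HU for muscle) are commonly cited literature values, and the function names are hypothetical.

```python
import numpy as np

# Assumed, typical HU ranges (not necessarily the study's exact limits).
ADIPOSE_HU = (-190, -30)
MUSCLE_HU = (-29, 150)

def subclassify(ct_hu: np.ndarray, region_mask: np.ndarray):
    """Return boolean masks for adipose tissue and muscle inside a region.

    ct_hu: CT volume in Hounsfield units; region_mask: boolean mask of one
    semantic body region (e.g. the abdominal cavity).
    """
    adipose = region_mask & (ct_hu >= ADIPOSE_HU[0]) & (ct_hu <= ADIPOSE_HU[1])
    muscle = region_mask & (ct_hu >= MUSCLE_HU[0]) & (ct_hu <= MUSCLE_HU[1])
    return adipose, muscle

def volume_ml(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Convert a voxel count to a volume in millilitres."""
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0
```

Because the thresholds operate on calibrated HU values, the same limits can be applied across scanners, which is what makes the derived volumes comparable between examinations.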
Results
The Sørensen Dice score averaged over all semantic regions was 0.9553, and the intra-class correlation coefficients for the subclassified tissues were above 0.99.
Conclusions
Our results show that fully automated body composition analysis on routine CT imaging can provide stable biomarkers across the whole abdomen, not just at the L3 level, which is historically the reference location for analyzing body composition in clinical routine.
Key Points
• Our study enables fully automated body composition analysis on routine abdomen CT scans.
• The best segmentation models for semantic body region segmentation achieved an averaged Sørensen Dice score of 0.9553.
• Subclassified tissue volumes achieved intra-class correlation coefficients over 0.99.
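The Sørensen Dice score reported above is the standard overlap measure for segmentation evaluation. A minimal sketch of its definition, 2·|A∩B| / (|A| + |B|) over boolean masks, follows; this is the generic formula, not the authors' exact implementation.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen Dice overlap between a predicted and a reference mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = int(pred.sum()) + int(truth.sum())
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * int(np.logical_and(pred, truth).sum()) / denom
```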
Apart from cardiac sarcoidosis, FDG-PET is rarely used in the diagnosis of myocardial inflammation, while cardiac MRI (CMR) is the current imaging reference for the workup of myocarditis. Using integrated PET/MRI in patients with suspected myocarditis, we prospectively compared FDG-PET with CMR and assessed the feasibility of integrated FDG-PET/MRI in myocarditis.
A total of 65 consecutive patients with suspected myocarditis were prospectively assessed using integrated cardiac FDG-PET/MRI. Studies comprised T2-weighted imaging, late gadolinium enhancement (LGE), and simultaneous PET acquisition. Physiological glucose uptake in the myocardium was suppressed using dietary preparation.
FDG-PET/MRI was successful in 55 of 65 enrolled patients: two patients were excluded due to claustrophobia and eight patients due to failed inhibition of myocardial glucose uptake. Compared with CMR (LGE and/or T2), the sensitivity and specificity of PET were 74% and 97%, respectively. Overall spatial agreement between PET and CMR was κ = 0.73. Spatial agreement between PET and T2 (κ = 0.75) was higher than agreement between PET and LGE (κ = 0.64) as well as between LGE and T2 (κ = 0.56).
In patients with suspected myocarditis, FDG-PET is in good agreement with CMR findings.
The number of images acquired per patient scan has rapidly increased due to advances in software, hardware, and digital imaging in the medical domain. Accurate medical image annotation systems are needed, as manual annotation is impractical, time-consuming, and prone to errors. This paper presents modeling approaches for automatically classifying and annotating radiographs using several classification schemes, which can further be applied to automatic content-based image retrieval (CBIR) and computer-aided diagnosis (CAD). Different image preprocessing and enhancement techniques were applied to augment grayscale radiographs by virtually adding two extra layers. The Image Retrieval in Medical Applications (IRMA) code, a mono-hierarchical multi-axial code, served as the basis for this work. To evaluate the image enhancement techniques extensively, five classification schemes including the complete IRMA code were adopted. The deep convolutional neural network systems Inception-v3 and Inception-ResNet-v2, and Random Forest models with 1,000 trees, were trained using extracted Bag-of-Keypoints visual representations. Model performance was evaluated on the ImageCLEF 2009 Medical Annotation Task test set. The applied visual enhancement techniques achieved better annotation accuracy in all classification schemes.
Fully integrated positron emission tomography (PET)/magnetic resonance imaging (MRI) scanners have been available for a few years. Since then, the number of scanner installations and published studies has been growing. While the feasibility of integrated PET/MRI has been demonstrated for many clinical and preclinical imaging applications, those applications where PET/MRI provides a clear benefit over the established reference standards now need to be identified. The current data show that applications demanding multiparametric imaging capabilities, high soft tissue contrast, and/or lower radiation dose seem to benefit from this novel hybrid modality. Promising results have been obtained in whole-body cancer staging in non-small cell lung cancer and in multiparametric tumor imaging. Furthermore, integrated PET/MRI appears to have added value in oncologic applications requiring high soft tissue contrast, such as the assessment of liver metastases of neuroendocrine tumors or prostate cancer imaging. Potential benefit of integrated PET/MRI has also been demonstrated for cardiac (i.e., myocardial viability, cardiac sarcoidosis) and brain (i.e., glioma grading, Alzheimer's disease) imaging, where MRI is the predominant modality. The lower radiation dose compared to PET/computed tomography will be particularly valuable in the imaging of young patients with potentially curable diseases. However, further clinical studies and technical innovation in scanner hardware and software are needed. Also, agreements on adequate reimbursement of PET/MRI examinations need to be reached. Finally, the translation of new PET tracers from preclinical evaluation into clinical applications is expected to foster the entire field of hybrid PET imaging, including PET/MRI.
Objectives
To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images using generative adversarial networks.
Methods
Dual-energy CT scans in the arterial phase of 85 patients were randomly split into an 80/20 training/test collective. Four different generative adversarial networks (GANs) were trained on image pairs, each comprising one image with virtually reduced ICM and the original full-ICM CT slice, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (−50% and −80%). The amount of intravenous ICM was virtually reduced by creating virtual non-contrast series from the dual-energy data and adding back the corresponding percentage of the iodine map. Evaluation was based on several scores assessing image quality and similarity (L1 loss, SSIM, PSNR, FID). Additionally, a visual Turing test (VTT) with three radiologists was used to assess similarity and pathological consistency.
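Two of the similarity scores named above can be sketched in a few lines, here for images scaled to [0, 1]: the mean absolute error (L1 loss) and the peak signal-to-noise ratio (PSNR). This is a generic illustration, not the authors' exact evaluation code; SSIM and FID require dedicated implementations (available, e.g., in scikit-image and torchmetrics) and are omitted.

```python
import numpy as np

def l1_loss(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(reference - candidate)))

def psnr(reference: np.ndarray, candidate: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    mse = float(np.mean((reference - candidate) ** 2))
    if mse == 0.0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Note that the absolute L1 and PSNR values reported in the Results depend on the intensity scale of the images, so they are only comparable between models evaluated on the same scale.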
Results
The −80% models reached an SSIM of >98%, a PSNR of >48, an L1 loss between 7.5 and 8, and an FID between 1.6 and 1.7. In comparison, the −50% models reached an SSIM of >99%, a PSNR of >51, an L1 loss between 6.0 and 6.1, and an FID between 0.8 and 0.95. For the crucial question of pathological consistency, only the −50% ICM reduction networks achieved 100% consistency, which is required for clinical use.
Conclusions
The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results.
Key Points
• The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks.
• Not only the image quality but especially the pathological consistency must be evaluated to assess safety.
• An overly pronounced contrast media reduction, 80% in our collective, could compromise pathological consistency.
Digital histopathology poses several challenges to deep learning, such as label noise, class imbalance, limited availability of labelled data, and several latent biases, negatively influencing transparency, reproducibility, and classification performance. In particular, biases are well known to cause poor generalization. Proposed tools from explainable artificial intelligence (XAI), bias detection, and bias discovery suffer from technical challenges, complexity, unintuitive usage, inherent biases, or a semantic gap. A promising XAI method not yet studied in the context of digital histopathology is automated concept-based explanation (ACE), which automatically extracts visual concepts from image data. Our objective is to evaluate ACE's technical validity following design science principles and to compare it to Guided Gradient-weighted Class Activation Mapping (Grad-CAM), a conventional pixel-wise explanation method. To that end, we created and studied five convolutional neural networks (CNNs) in four different skin cancer settings. Our results demonstrate that ACE is a valid tool for gaining insights into the decision process of histopathological CNNs that can go beyond explanations from the control method. ACE validly visualized a class sampling ratio bias, a measurement bias, a sampling bias, and a class-correlated bias. Furthermore, the complementary use with Guided Grad-CAM offers several benefits. Finally, we propose practical solutions for several technical challenges. In contrast to results from the literature, we noticed lower intuitiveness in some dermatopathology scenarios as compared to concept-based explanations on real-world images.
Recently, radiomics has emerged as a non-invasive, imaging-based tissue characterization method in multiple cancer types. One limitation for robust and reproducible analysis lies in the inter-reader variability of tumor annotations, which can potentially cause differences in the extracted feature sets and results. In this study, the diagnostic potential of a rapid and clinically feasible volume-of-interest (VOI)-based approach to radiomics is investigated to assess MR-derived parameters for predicting molecular subtype, hormonal receptor status, Ki67 and HER2 expression, lymph node metastasis, lymph vessel involvement, and grading in patients with breast cancer.
A total of 98 treatment-naïve patients (mean age 59.7 years, range 28.0–89.4) with BI-RADS 5 and 6 lesions who underwent a dedicated breast MRI prior to therapy were retrospectively included in this study. The imaging protocol comprised dynamic contrast-enhanced T1-weighted imaging and T2-weighted imaging. Tumor annotations were obtained by drawing VOIs around the primary tumor lesions followed by thresholding. From each segmentation, 13,118 quantitative imaging features were extracted and analyzed with machine learning methods. Validation was performed by 5-fold cross-validation with 25 repeats.
Predictions for molecular subtypes obtained AUCs of 0.75 (HER2-enriched), 0.73 (triple-negative), 0.65 (luminal A), and 0.69 (luminal B). Differentiation between subtypes was best for HER2-enriched vs triple-negative (AUC 0.97), followed by luminal B vs triple-negative (0.86). Receptor status predictions for estrogen receptor (ER), progesterone receptor (PR), and hormone receptor positivity yielded AUCs of 0.67, 0.69, and 0.69, while Ki67 and HER2 expression achieved 0.81 and 0.62. Involvement of the lymph vessels could be predicted with an AUC of 0.8, while lymph node metastasis yielded an AUC of 0.71. Models for grading performed similarly, with an AUC of 0.71 for Elston-Ellis grading and 0.74 for histological grading.
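For reference, the AUC values reported above can be computed from prediction scores and binary labels with the rank-based (Mann-Whitney U) formulation: the fraction of positive/negative pairs in which the positive case receives the higher score, counting ties as half. The sketch below is a generic illustration, not the study's cross-validation pipeline.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predicted probabilities or decision values;
    labels: binary ground truth (1 = positive, 0 = negative).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("AUC requires at least one positive and one negative case")
    # Count pairwise "wins" of positives over negatives; ties count as 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, which puts the reported values (0.62 to 0.97) in context.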
Our preliminary results with a rapid approach to VOI-based tumor annotation for radiomics are comparable to those of current publications, with the added benefit of clinical suitability, enabling a comprehensive non-invasive platform for breast tumor decoding and phenotyping.
The aim of this study was to evaluate and quantify the effect of improved attenuation correction (AC), including bone segmentation and truncation correction, on 18F-fluorodeoxyglucose cardiac positron emission tomography/magnetic resonance (PET/MR) imaging.
PET data of 32 cardiac PET/MR datasets were reconstructed with three different AC-maps: (1) Dixon-VIBE only, (2) HUGE truncation correction with bone segmentation, and (3) MLAA. The Dixon-VIBE AC-maps served as the reference for the reconstructed PET data. 17-segment short-axis polar plots of the left ventricle were analyzed regarding the impact of each of the three AC methods on PET quantification in cardiac PET/MR imaging. Non-AC PET images were segmented to quantify the amount of truncation in the reference Dixon-VIBE AC-map. All AC-maps were evaluated for artifacts.
Compared with Dixon-VIBE AC alone, HUGE + bone AC resulted in a homogeneous gain in PET signal distribution across the myocardium of the left ventricle of about 6% over all patients, and MLAA in a gain of about 8%. Maximal relative differences of up to 18% were observed in segment 17 (apex). The body volume truncation of −12.7 ± 7.1% relative to the segmented non-AC PET images under the Dixon-VIBE AC method was reduced to −1.9 ± 3.9% using HUGE and 7.8 ± 8.3% using MLAA. In each patient, a systematic overestimation of AC-map volume was observed when applying MLAA. The quantitative impact of artifacts showed regional differences of up to 6% within single segments of the myocardium.
Improved AC including bone segmentation and truncation correction in cardiac PET/MR imaging is important to ensure best possible diagnostic quality and PET quantification. The results exhibited an overestimation of AC-map volume using MLAA, while HUGE resulted in a more realistic body contouring. Incorporation of bone segmentation into the Dixon-VIBE AC-map resulted in homogeneous gain in PET signal distribution across the myocardium. The majority of observed AC-map artifacts did not significantly affect the quantitative assessment of the myocardium.
Detection of ossification areas of hand bones in X-ray images is an important task, e.g., as a preprocessing step in automated bone age estimation. Deep neural networks have recently emerged as the de facto standard detection method, but their drawback is the need for large annotated datasets. Finetuning pre-trained networks is a viable alternative, but it is not clear a priori whether training with small annotated datasets will succeed, as this depends on the problem at hand. In this paper, we show that pre-trained networks can be utilized to produce an effective detector of ossification areas in pediatric X-ray images of hands.
A publicly available Faster R-CNN network, pre-trained on the COCO dataset, was finetuned with 240 manually annotated radiographs from the RSNA Pediatric Bone Age Challenge, which comprises over 14,000 pediatric radiographs. Validation was performed on another 89 radiographs from the dataset, with performance measured by Intersection-over-Union (IoU). To understand the effect of the dataset size on the pre-trained network, the training data were subsampled and training was repeated. Additionally, the network was trained from scratch without any pre-trained weights. Finally, to understand whether the trained model could be useful, we compared the network's inferences to the annotations of an expert radiologist. The finetuned network achieved an average precision (mAP@0.5IoU) of 92.92 ± 1.93. Apart from the wrist region, all ossification areas benefited from more data. In contrast, the network trained from scratch was not able to produce any correct results. Compared with the annotations of the expert radiologist, the network localized the regions well, with an average F1 score of 91.85 ± 1.06.
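The IoU criterion used for validation above can be sketched for axis-aligned bounding boxes as the ratio of intersection area to union area. This is the standard definition, not code from the study; boxes are assumed here to be given as (x1, y1, x2, y2) corner coordinates.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp negative extents to zero for disjoint boxes.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection typically counts as correct at a threshold such as IoU ≥ 0.5, which is the "0.5IoU" in the mAP@0.5IoU metric reported above.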
By finetuning a pre-trained deep neural network with 240 annotated radiographs, we were able to successfully detect ossification areas in pediatric hand radiographs.