In this retrospective study, we aimed to predict the body height and weight of pediatric patients from CT localizers, the overview scans acquired before the CT scan itself. We trained three commonly used networks (EfficientNetV2-S, ResNet-18, and ResNet-34) on a cohort of 1009 and 1111 CT localizers (for height and weight, respectively) of pediatric patients with recorded body height and weight (acquired between January 2013 and December 2019) and validated them on an additional cohort of 116 and 127 localizers (acquired in 2020). The best-performing model was then tested in an independent cohort of 203 and 225 CT localizers (acquired between January 2021 and March 2023). In addition, a cohort of 1401 and 1590 localizers from younger adults (acquired between January 2013 and December 2013) was added to the training set to determine whether it could improve overall accuracy. The EfficientNetV2-S trained with the additional adult cohort performed best, with a mean absolute error of 5.58 ± 4.26 cm for height and 4.25 ± 4.28 kg for weight. The relative error was 4.12 ± 4.05% for height and 11.28 ± 12.05% for weight. Our study demonstrates that automated estimation of height and weight in pediatric patients from CT localizers is feasible.
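The two error measures reported above can be sketched in plain Python. The predictions and targets below are purely illustrative, not data from the study; only the metric definitions (mean absolute error, and relative error as a percentage of the true value) follow the abstract.

```python
# Sketch of the evaluation metrics reported above (MAE and relative
# error), computed from hypothetical height predictions in cm.

def mae(preds, targets):
    """Mean absolute error between predictions and ground truth."""
    errors = [abs(p - t) for p, t in zip(preds, targets)]
    return sum(errors) / len(errors)

def mean_relative_error(preds, targets):
    """Mean relative error, in percent of the true value."""
    rel = [abs(p - t) / t * 100 for p, t in zip(preds, targets)]
    return sum(rel) / len(rel)

# Hypothetical height predictions (cm) for four pediatric patients
pred_height = [112.0, 148.5, 95.0, 160.0]
true_height = [110.0, 152.0, 99.0, 155.0]

print(mae(pred_height, true_height))                  # MAE in cm
print(mean_relative_error(pred_height, true_height))  # relative error in %
```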
An important quality criterion for radiographs is correct anatomical side marking. We evaluated a deep neural network for predicting the correct anatomical side in radiographs of the knee acquired in anterior-posterior direction. In this retrospective study, a ResNet-34 network was trained on 2892 radiographs from 2540 patients to predict the anatomical side of the knee. The network was evaluated on an internal validation cohort of 932 radiographs from 816 patients and on an external validation cohort of 490 radiographs from 462 patients. It showed an accuracy of 99.8% and 99.9% on the internal and external validation cohorts, respectively, which is comparable to the accuracy of radiographers. The anatomical side in anterior-posterior radiographs of the knee can thus be deduced with high accuracy using deep learning.
Objectives
Age estimation, especially in pediatric patients, is regularly used in contexts ranging from forensic and medicolegal to clinical applications. A deep neural network was developed to automatically estimate chronological age from knee radiographs of pediatric patients.
Methods
In this retrospective study, 3816 knee radiographs of pediatric patients from a German population (acquired between January 2008 and December 2018) were collected to train a neural network to predict chronological age. The network was evaluated on an independent internal validation cohort of 423 radiographs (acquired between January 2019 and December 2020) and on an external validation cohort of 197 radiographs.
Results
The model showed a mean absolute error of 0.86 ± 0.72 years and 0.90 ± 0.71 years on the internal and external validation cohorts, respectively. Separating age classes (< 14 years from ≥ 14 years, and < 18 years from ≥ 18 years) yielded AUCs between 0.94 and 0.98.
Conclusions
The chronological age of pediatric patients can be estimated with good accuracy from radiographs of the knee using a deep neural network.
Key Points
• Radiographs of the knee can be used for age estimation in pediatric patients using a standard deep neural network.
• The network showed a mean absolute error of 0.86 ± 0.72 years in an internal validation cohort and of 0.90 ± 0.71 years in an external validation cohort.
• The network can be used to separate the age classes < 14 years from ≥ 14 years with an AUC of 0.97 and < 18 years from ≥ 18 years with an AUC of 0.94.
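An age-class AUC as reported above can be obtained directly from the continuous age predictions via the rank-based (Mann-Whitney) formulation of the AUC. The sketch below uses hypothetical predicted ages and labels, not study data; it only illustrates how a regression output is scored against a binary threshold such as ≥ 14 years.

```python
# Sketch: scoring continuous age predictions against a binary age
# class (e.g. chronological age >= 14 years) using the rank-based
# Mann-Whitney formulation of the AUC. All numbers are illustrative.

def auc_from_scores(scores, labels):
    """AUC = P(score of a positive case > score of a negative case),
    counting ties as 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical predicted ages (years) and true class labels
# (1 if chronological age >= 14 years, else 0)
predicted_age = [9.2, 12.8, 13.5, 14.6, 15.1, 17.3]
is_14_or_older = [0, 0, 1, 1, 1, 0]

print(auc_from_scores(predicted_age, is_14_or_older))
```

A perfectly separating predictor yields an AUC of 1.0; the last (mislabeled) hypothetical case above is what pulls the toy AUC below that.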
For CT pulmonary angiograms, a scout view obtained in anterior-posterior projection is usually used for planning. For bolus tracking, the radiographer manually locates a position in the CT scout view at which the pulmonary trunk will be visible in an axial CT pre-scan. We automated the task of localizing the pulmonary trunk in CT scout views using deep learning. In 620 eligible CT scout views of 563 patients (acquired between March 2003 and February 2020), the region of the pulmonary trunk, as well as an optimal slice ("reference standard") for bolus tracking in which the pulmonary trunk was clearly visible, was annotated and used to train a U-Net to predict the region of the pulmonary trunk in the CT scout view. The network's performance was subsequently evaluated on 239 CT scout views from 213 patients and compared with the annotations of three radiographers. The network localized a slice within the region of the pulmonary trunk in 97.5% of cases in the validation cohort. On average, the selected position deviated by 5.3 mm from the reference standard. In a non-inferiority test (one-sided, paired Wilcoxon rank-sum test), the network performed as well as each radiographer (P < 0.001 in all cases). Automated localization of the region of the pulmonary trunk in CT scout views is thus possible with high accuracy and is non-inferior to three radiographers.
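The post-processing implied above (predicted region mask → single bolus-tracking slice → distance in mm) can be sketched as follows. The mask, the row spacing, and the choice of the region's vertical centre as the slice position are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch: deriving a bolus-tracking slice position from a predicted
# binary mask of the pulmonary-trunk region in a scout view, and
# converting the offset from a reference row into millimetres.
# Mask, spacing, and centre-of-region heuristic are hypothetical.

def select_slice_row(mask):
    """Return the centre row index of the foreground region."""
    rows = [r for r, line in enumerate(mask) if any(line)]
    if not rows:
        raise ValueError("empty mask: no region predicted")
    return (min(rows) + max(rows)) // 2

def distance_mm(row_a, row_b, row_spacing_mm):
    """Distance between two row indices, given the row spacing in mm."""
    return abs(row_a - row_b) * row_spacing_mm

# Toy 8x4 mask with a predicted region spanning rows 3-5
mask = [[0] * 4 for _ in range(8)]
for r in (3, 4, 5):
    mask[r] = [0, 1, 1, 0]

predicted_row = select_slice_row(mask)      # centre row of the region
print(distance_mm(predicted_row, 3, 0.7))   # offset from a hypothetical reference row
```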
This article addresses the impact of artificial intelligence (AI) on the profession of radiologic technologists (Medizinische Technologinnen und Technologen für Radiologie, MTR). AI could support and relieve MTR along the entire patient-care pathway, in all areas of radiology. AI could divide the MTR profession into distinct fields of activity, split between patient-facing and non-patient-facing tasks. In the future, MTR with expertise in AI could take on supervisory roles, while MTR who do not engage with AI would carry out patient-facing tasks such as patient care and positioning. It is foreseeable that AI will take over some of the MTR's tasks in the near future and will perform examinations autonomously in the more distant future. To keep the MTR profession viable for the future, MTR should develop a strategy and actively participate in this development.
Purpose
The aim of the study was to prospectively compare the diagnostic value of whole-body diffusion-weighted imaging (DWI) and FDG PET/CT for breast cancer (BC) staging.
Methods
Twenty BC patients underwent whole-body FDG PET/CT and 1.5-T DWI. Lesions with qualitatively elevated signal intensity on DW images (b = 800 s/mm²) were rated as suspicious for tumour and mapped to individual lesions and different compartments (552 lesions overall). The apparent diffusion coefficient (ADC) value was determined for quantitative evaluation. Histopathology, MRI findings, bone scan findings, concordant findings between FDG PET/CT and DWI, CT follow-up scans and plausibility served as the standards of reference defining malignancy.
Results
According to the standards of reference, breasts harboured malignancy in 11 patients, regional lymph nodes in 4, M1 lymph nodes in 3, bone in 7, lung in 2, liver in 3 and other tissues in 3. On a compartment basis, the sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) for the detection of malignancies were 94%, 99%, 98%, 97% and 98% for FDG PET/CT and 91%, 72%, 76%, 50% and 96% for DWI, respectively. Of the lesions seen on DWI only, 348 (82%) turned out to be false-positive, compared to 23 (11%) on FDG PET/CT. The average lesion ADC was 820 ± 300, with true-positive lesions showing 929 ± 252 vs 713 ± 305 in false-positive lesions (p < 0.0001).
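The compartment-level metrics reported above all derive from a single confusion matrix. The sketch below shows the standard definitions with hypothetical counts; the numbers are not the study's data.

```python
# Sketch of the compartment-level diagnostic metrics reported above
# (sensitivity, specificity, accuracy, PPV, NPV), computed from a
# confusion matrix with hypothetical counts.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy, PPV, NPV)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, acc, ppv, npv

# Hypothetical compartment counts
sens, spec, acc, ppv, npv = diagnostic_metrics(tp=30, fp=2, tn=160, fn=2)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```

A modality with many false positives (as reported for DWI here) lowers specificity and PPV while sensitivity and NPV can remain high, which is exactly the pattern in the figures above.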
Conclusion
Based on these initial data, DWI appears to be a sensitive but unspecific modality for the detection of locoregional or metastatic BC disease. Lesions could not be reliably distinguished quantitatively by their ADC values. DWI alone can therefore not be recommended as a whole-body staging alternative to FDG PET/CT. Further studies are needed to address whether whole-body MRI including DWI may become an alternative to FDG PET/CT for whole-body breast cancer staging.