Artificial intelligence (AI) represents a broad category of algorithms, of which deep learning is currently the most impactful. Clinical musculoskeletal radiologists who elect to build the fundamental knowledge base needed to decipher machine learning research and algorithms currently have few resources to turn to. In this article, we provide an introduction to the vital terminology, explain how to make sense of data splits and regularization, introduce the statistical analyses used in AI research, offer a primer on what deep learning can and cannot do, and give a brief overview of clinical integration methods. Our goal is to improve the reader's understanding of this field.
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, VSZLJ, ZAGLJ
Artificial intelligence (AI) and deep learning have multiple potential uses in aiding the musculoskeletal radiologist in the radiological evaluation of orthopedic implants. These include identification of implants, characterization of implants according to anatomic type, identification of specific implant models, and evaluation of implants for positioning and complications. In addition, natural language processing (NLP) can aid in the acquisition of clinical information from the medical record that can help with tasks like prepopulating radiology reports. Several proof-of-concept works have been published in the literature describing the application of deep learning toward these various tasks, with performance comparable to that of expert musculoskeletal radiologists. Although much work remains to bring these proof-of-concept algorithms into clinical deployment, AI has tremendous potential toward automating these tasks, thereby augmenting the musculoskeletal radiologist.
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, VSZLJ, ZAGLJ
To use deep learning with advanced data augmentation to accurately diagnose and classify femoral neck fractures. A retrospective study of patients with femoral neck fractures was performed. One thousand sixty-three AP hip radiographs were obtained from 550 patients. Ground truth labels of Garden fracture classification were applied as follows: (1) 127 Garden I and II fracture radiographs, (2) 610 Garden III and IV fracture radiographs, and (3) 326 normal hip radiographs. After localization by an initial network, a second CNN classified the images as Garden I/II fracture, Garden III/IV fracture, or no fracture. Advanced data augmentation techniques expanded the training set: (1) generative adversarial network (GAN); (2) digitally reconstructed radiographs (DRRs) from preoperative hip CT scans. In all, 9063 images, real and generated, were available for training and testing. A deep neural network was designed and tuned based on a 20% validation group. A holdout test dataset consisted of 105 real images, 35 in each class. Two-class prediction of fracture versus no fracture achieved an AUC of 0.92: accuracy 92.3%, sensitivity 0.91, specificity 0.93, PPV 0.96, NPV 0.86. Three-class prediction of Garden I/II, Garden III/IV, or normal achieved an AUC of 0.96: accuracy 86.0%, sensitivity 0.79, specificity 0.90, PPV 0.80, NPV 0.90. Without any advanced augmentation, the AUC for two-class prediction was 0.80. With DRR as the only advanced augmentation, the AUC was 0.91; with GAN only, the AUC was 0.87. GANs and DRRs can be used to improve the accuracy of a tool to diagnose and classify femoral neck fractures.
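The two-class results reported above (accuracy, sensitivity, specificity, PPV, NPV) all derive from a single 2×2 confusion matrix. A minimal sketch with made-up counts, not the study's actual matrix:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall for the fracture class
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# made-up counts for illustration only
m = binary_metrics(tp=45, fp=10, tn=40, fn=5)
print(m["sensitivity"], m["specificity"])  # 0.9 0.8
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with class prevalence, which is why a balanced holdout set (35 images per class) matters when interpreting them.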
Full text
Available for:
NUK, OBVAL, SBMB, SBNM, UL, UM, UPUK, VSZLJ
The computed tomography (CT) pattern of definite or probable usual interstitial pneumonia (UIP) can be diagnostic of idiopathic pulmonary fibrosis and may obviate the need for invasive surgical biopsy. Few machine-learning studies have investigated the classification of interstitial lung disease (ILD) on CT imaging, but none have used histopathology as a reference standard.
To predict histopathologic UIP using deep learning of high-resolution computed tomography (HRCT).
Institutional databases were retrospectively searched for consecutive patients with ILD, HRCT, and diagnostic histopathology from 2011 to 2014 (training cohort) and from 2016 to 2017 (testing cohort). A blinded expert radiologist and pulmonologist reviewed all training HRCT scans in consensus and classified HRCT scans based on the 2018 American Thoracic Society/European Respiratory Society/Japanese Respiratory Society/Latin American Thoracic Association diagnostic criteria for idiopathic pulmonary fibrosis. A convolutional neural network (CNN) was built accepting 4 × 4 × 2 cm virtual wedges of peripheral lung on HRCT as input and outputting the UIP histopathologic pattern. The CNN was trained and evaluated on the training cohort using fivefold cross-validation and was then tested on the hold-out testing cohort. CNN and human performance were compared in the training cohort. Logistic regression and survival analyses were performed.
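The fivefold cross-validation described here partitions the cohort into five disjoint folds, each serving once as the validation set. Splitting must happen at the patient level, not the wedge level, or wedges from one patient would leak across splits. A minimal sketch (stdlib only, illustrative seed):

```python
import random

def five_fold_splits(n_patients, seed=0):
    """Partition patient indices into 5 disjoint folds.
    Returns 5 (train, validation) index pairs; every patient
    appears in exactly one validation fold."""
    idx = list(range(n_patients))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]
    splits = []
    for k in range(5):
        train = [i for f in folds if f is not folds[k] for i in f]
        splits.append((train, folds[k]))
    return splits

splits = five_fold_splits(221)  # training cohort size from the abstract
print(len(splits), len(splits[0][1]))
```

All wedges from a patient inherit that patient's fold assignment; libraries such as scikit-learn offer `GroupKFold` for the same purpose.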
The CNN was trained on 221 patients (median age, 60 yr; interquartile range [IQR], 53-66), including 71 patients (32%) with UIP or probable UIP histopathologic patterns. The CNN was tested on a separate hold-out cohort of 80 patients (median age, 66 yr; IQR, 58-69), including 22 patients (27%) with UIP or probable UIP histopathologic patterns. An average of 516 wedges were generated per patient. The percentage of wedges with CNN-predicted UIP yielded a cross-validation area under the curve of 74% for histopathologic UIP pattern per patient. The optimal cutoff point for classifying patients on the training cohort was 16.5% of virtual lung wedges with CNN-predicted UIP and resulted in sensitivity and specificity of 74% and 58%, respectively, in the testing cohort. CNN-predicted UIP was associated with an increased risk of death or lung transplantation during cross-validation (hazard ratio, 1.5; 95% confidence interval, 1.1-2.2; P = 0.03).
Virtual lung wedge resection in patients with ILD can be used as an input to a CNN for predicting the histopathologic UIP pattern and transplant-free survival.
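The 16.5% cutoff above is a threshold on the per-patient fraction of wedges predicted as UIP. One standard way to choose such a cutoff (the abstract does not specify the authors' method) is the Youden index, which maximizes sensitivity + specificity − 1 over candidate thresholds. A sketch on toy scores, not the study's data:

```python
def youden_cutoff(scores, labels):
    """Return the threshold on `scores` maximizing sensitivity + specificity - 1.
    scores: per-patient fraction of wedges predicted UIP (hypothetical values);
    labels: 1 = histopathologic UIP, 0 = non-UIP."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# toy example (made-up patient scores)
scores = [0.05, 0.10, 0.20, 0.30, 0.40, 0.08]
labels = [0,    0,    1,    1,    1,    0   ]
print(youden_cutoff(scores, labels))  # 0.2 (separates the toy classes perfectly)
```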
The rapid spread of coronavirus disease 2019 (COVID-19) revealed significant constraints in critical care capacity. In anticipation of subsequent waves, reliable prediction of disease severity is essential for critical care capacity management and may enable earlier targeted interventions to improve patient outcomes. The purpose of this study is to develop and externally validate a prognostic model/clinical tool for predicting COVID-19 critical disease at presentation to medical care.
This is a retrospective study of a prognostic model for the prediction of COVID-19 critical disease where critical disease was defined as ICU admission, ventilation, and/or death. The derivation cohort was used to develop a multivariable logistic regression model. Covariates included patient comorbidities, presenting vital signs, and laboratory values. Model performance was assessed on the validation cohort by concordance statistics. The model was developed with consecutive patients with COVID-19 who presented to University of California Irvine Medical Center in Orange County, California. External validation was performed with a random sample of patients with COVID-19 at Emory Healthcare in Atlanta, Georgia.
Of a total of 3208 patients tested in the derivation cohort, 9% (299/3028) were positive for COVID-19. Clinical data including past medical history and presenting laboratory values were available for 29% (87/299) of patients (median age, 48 years; range, 21-88 years; 64% [36/55] male). The most common comorbidities included obesity (37%, 31/87), hypertension (37%, 32/87), and diabetes (24%, 24/87). Critical disease was present in 24% (21/87). After backward stepwise selection, the following factors were associated with the greatest increased risk of critical disease: number of comorbidities, body mass index, respiratory rate, white blood cell count, % lymphocytes, serum creatinine, lactate dehydrogenase, high sensitivity troponin I, ferritin, procalcitonin, and C-reactive protein. Of a total of 40 patients in the validation cohort (median age, 60 years; range, 27-88 years; 55% [22/40] male), critical disease was present in 65% (26/40). Model discrimination in the validation cohort was high (concordance statistic: 0.94; 95% confidence interval, 0.87-1.01). A web-based tool was developed to enable clinicians to input patient data and view the likelihood of critical disease.
We present a model which accurately predicted COVID-19 critical disease risk using comorbidities and presenting vital signs and laboratory values, on derivation and validation cohorts from two different institutions. If further validated on additional cohorts of patients, this model/clinical tool may provide useful prognostication of critical care needs.
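The concordance statistic used to assess discrimination is the probability that a randomly chosen patient who developed critical disease received a higher predicted risk than a randomly chosen patient who did not. A brute-force sketch on hypothetical predicted risks (not the study's model output):

```python
def concordance(probs, outcomes):
    """Concordance (c) statistic over all case/non-case pairs.
    Ties in predicted risk count as half-concordant."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    pairs = concordant = 0.0
    for a in pos:
        for b in neg:
            pairs += 1
            if a > b:
                concordant += 1
            elif a == b:
                concordant += 0.5
    return concordant / pairs

# hypothetical predicted risks from a fitted logistic model
probs    = [0.9, 0.6, 0.3, 0.7, 0.1]
outcomes = [1,   1,   0,   0,   0  ]
print(round(concordance(probs, outcomes), 3))  # 0.833
```

For binary outcomes this equals the area under the ROC curve, which is why values near 0.5 indicate no discrimination and values near 1.0 near-perfect discrimination.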
Full text
Available for:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
Objectives
In the postneoadjuvant chemotherapy (NAC) setting, conventional radiographic complete response (rCR) is a poor predictor of pathologic complete response (pCR) of the axilla. We developed a convolutional neural network (CNN) algorithm to better predict post-NAC axillary response using a breast MRI dataset.
Methods
An institutional review board-approved retrospective study from January 2009 to June 2016 identified 127 breast cancer patients who: (1) underwent breast MRI before the initiation of NAC; (2) successfully completed Adriamycin/Taxane-based NAC; and (3) underwent surgery, including sentinel lymph node evaluation/axillary lymph node dissection, with final surgical pathology data. Patients were classified into a pathologic complete response (pCR) of the axilla group and a non-pCR group based on surgical pathology. Breast MRI performed before NAC was used. Tumors were identified on the first T1 postcontrast images and underwent 3D segmentation. A total of 2811 volumetric slices of 127 tumors were evaluated. The CNN consisted of 10 convolutional layers and 4 max-pooling layers. Dropout, augmentation, and L2 regularization were implemented to prevent overfitting of the data.
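Dropout, one of the regularization techniques listed here, randomly zeroes activations during training so the network cannot rely on any single feature; the common "inverted" variant rescales the survivors so that no change is needed at inference. A minimal stdlib sketch, not the authors' implementation:

```python
import random

def dropout(activations, rate, training, seed=None):
    """Inverted dropout on a list of activations.
    Training: zero each value with probability `rate` and scale survivors
    by 1/(1-rate) so the expected activation is unchanged.
    Inference: identity pass-through."""
    if not training or rate == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [0.5, 1.0, 2.0, 4.0]
print(dropout(acts, rate=0.5, training=False))  # unchanged at inference
```

Frameworks apply the same idea tensor-wide (e.g., a dropout layer between convolutional blocks), alongside the L2 weight penalty the abstract mentions.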
Results
On final surgical pathology, 38.6% (49/127) of the patients achieved pCR of the axilla (group 1), and 61.4% (78/127) did not, with residual metastasis detected (group 2). For predicting axillary pCR, our CNN algorithm achieved an overall accuracy of 83% (95% confidence interval [CI] ±5) with sensitivity of 93% (95% CI ±6) and specificity of 77% (95% CI ±4). The area under the ROC curve was 0.93 (95% CI ±0.04).
Conclusions
It is feasible to use a CNN architecture to predict post-NAC axillary pCR. A larger dataset will likely improve our prediction model.
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OBVAL, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, ZAGLJ
The aim of this study is to develop a fully automated convolutional neural network (CNN) method for quantification of breast MRI fibroglandular tissue (FGT) and background parenchymal enhancement (BPE). An institutional review board-approved retrospective study evaluated 1114 breast volumes in 137 patients using T1 precontrast, T1 postcontrast, and T1 subtraction images. First, using our previously published method of quantification, we manually segmented and calculated the amount of FGT and BPE to establish ground truth parameters. Then, a novel 3D CNN, modified from the standard 2D U-Net architecture, was developed and implemented for voxel-wise prediction of whole breast and FGT margins. In the collapsing arm of the network, a series of 3D convolutional filters of size 3 × 3 × 3 are applied for standard CNN hierarchical feature extraction. To reduce feature map dimensionality, a 3 × 3 × 3 convolutional filter with stride 2 in all directions is applied; a total of 4 such operations are used. In the expanding arm of the network, a series of convolutional transpose filters of size 3 × 3 × 3 are used to up-sample each intermediate layer. To synthesize features at multiple resolutions, connections are introduced between the collapsing and expanding arms of the network. L2 regularization was implemented to prevent over-fitting. Cases were separated into training (80%) and test (20%) sets. Fivefold cross-validation was performed. Software code was written in Python using the TensorFlow module on a Linux workstation with an NVIDIA GTX Titan X GPU. In the test set, the fully automated CNN method for quantifying the amount of FGT yielded an accuracy of 0.813 (cross-validation Dice score coefficient) and a Pearson correlation of 0.975. For quantifying the amount of BPE, the CNN method yielded an accuracy of 0.829 and a Pearson correlation of 0.955. Our CNN was able to quantify FGT and BPE within an average of 0.42 s per MRI case.
A fully automated CNN method can be utilized to quantify MRI FGT and BPE. A larger dataset will likely improve our model.
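The Dice score coefficient used above to measure segmentation accuracy is twice the overlap of the predicted and ground-truth masks divided by their combined size. A small sketch on toy binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|). Ranges from 0 (no overlap) to 1."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((4, 4), int); a[:2, :] = 1   # 8 "voxels" predicted
b = np.zeros((4, 4), int); b[1:3, :] = 1  # 8 ground-truth voxels, 4 overlapping
print(dice(a, b))  # 0.5
```

In practice the same formula is applied to the 3D FGT and whole-breast masks, voxel-wise, per case.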
Full text
Available for:
NUK, OBVAL, SBMB, SBNM, UL, UM, UPUK, VSZLJ
Artificial intelligence (AI) is a broad umbrella term used to encompass a wide variety of subfields dedicated to creating algorithms to perform tasks that mimic human intelligence. As AI development grows closer to clinical integration, radiologists will need to become familiar with the principles of artificial intelligence to properly evaluate and use this powerful tool. This series aims to explain basic concepts of artificial intelligence and their applications in medical imaging, starting with the concept of overfitting.
•This series aims to explain basic concepts of artificial intelligence (AI) and its applications in medical imaging.
•Overfitting means that an AI model has learned in a manner that is mainly applicable to the training data.
•Overfitting is a major obstacle for AI technology, hindering its generalizability to the overall population.
•Overfitting can be minimized by a large training dataset, data augmentation, or techniques such as regularization and dropout.
•Before AI algorithms can be incorporated clinically, external validation will be necessary to ensure generalizability.
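Data augmentation, listed above as a remedy for overfitting, enlarges the training set with label-preserving transforms of each image. A minimal geometric example using the eight symmetries of a square image; whether flips and rotations are anatomically appropriate depends on the task:

```python
import numpy as np

def augment(image):
    """Return 8 augmented views of a 2D image:
    4 rotations x optional horizontal flip."""
    variants = []
    for k in range(4):
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

img = np.arange(16).reshape(4, 4)  # stand-in for a radiograph patch
print(len(augment(img)))  # 8 views per input image
```

Intensity jitter, elastic deformation, and the GAN/DRR synthesis described in the femoral neck fracture study above are heavier-weight variants of the same idea.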
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
The purpose of this study is to evaluate the global trend in artificial intelligence (AI)-based research productivity involving radiology and its subspecialty disciplines.
The United States is the global leader in AI radiology publication productivity, accounting for almost half of total radiology AI output. Other countries have increased their productivity as well; notably, China has increased its output exponentially, to close to 20% of all AI publications. The top three most productive radiology subspecialties were neuroradiology, body and chest, and nuclear medicine.
Artificial intelligence (AI) in radiology has gained wide interest due to the development of neural network architectures with high performance in computer vision related tasks. As AI-based software programs become more integrated into the clinical workflow, radiologists can benefit from better understanding the principles of artificial intelligence. This series aims to explain basic concepts of AI and its applications in medical imaging. In this article, we review the background of neural network architecture and its application in imaging analysis.
•Artificial intelligence based neural network architectures have yielded high performance in computer vision related tasks.
•This article explains basic concepts of network architecture and its potential role in medical imaging analysis.
•Tailored CNN architectures have been developed for different tasks, including classification, detection, and segmentation.
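The building block shared by all of these CNN architectures is the convolution: a small learned kernel slid across the image, producing a feature map. A minimal "valid" cross-correlation sketch; deep learning libraries implement this far more efficiently and learn the kernel weights from data:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and sum elementwise products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

img = np.arange(1, 17, dtype=float).reshape(4, 4)
k = np.ones((3, 3))   # a smoothing-style kernel; learned in real CNNs
print(conv2d(img, k)) # 2x2 feature map
```

Stacking such layers with nonlinearities and pooling yields the hierarchical features that classification, detection, and segmentation networks build on.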
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP