Chest x-ray is a relatively accessible, inexpensive, fast imaging modality that might be valuable in the prognostication of patients with COVID-19. We aimed to develop and evaluate an artificial intelligence system using chest x-rays and clinical data to predict disease severity and progression in patients with COVID-19.
We did a retrospective study in multiple hospitals in the University of Pennsylvania Health System in Philadelphia, PA, USA, and Brown University affiliated hospitals in Providence, RI, USA. Patients who presented to a hospital in the University of Pennsylvania Health System via the emergency department, with a diagnosis of COVID-19 confirmed by RT-PCR and with an available chest x-ray from their initial presentation or admission, were retrospectively identified and randomly divided into training, validation, and test sets (7:1:2). Models taking the chest x-rays (processed by an EfficientNet deep neural network) and clinical data as input were trained to predict the binary outcome of disease severity (ie, critical or non-critical). The deep-learning features extracted from the model and the clinical data were then used to build time-to-event models to predict the risk of disease progression. The models were externally tested on patients who presented to an independent multicentre institution, Brown University affiliated hospitals, and compared with severity scores provided by radiologists.
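The multimodal severity model described above can be illustrated with a minimal early-fusion sketch: image-derived features are concatenated with clinical variables and fed to a single classifier. This is a hypothetical stand-in (synthetic data, a logistic-regression head instead of the study's EfficientNet pipeline), not the authors' implementation.

```python
# Illustrative sketch only: synthetic stand-ins for deep image features and
# clinical variables, with a simple classifier in place of the study's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
img_feats = rng.normal(size=(n, 16))   # stand-in for CXR deep-learning features
clinical = rng.normal(size=(n, 4))     # stand-in for age, vitals, labs, etc.

# Synthetic binary outcome (critical vs non-critical) loosely tied to the features
logits = img_feats[:, 0] + 0.5 * clinical[:, 0]
y = (logits + rng.normal(scale=1.0, size=n) > 0).astype(int)

X = np.hstack([img_feats, clinical])   # early fusion: concatenate the two modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test ROC-AUC: {auc:.3f}")
```

The same concatenated feature vector is what the study's time-to-event models would consume in place of a binary classifier.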
1834 patients who presented via the University of Pennsylvania Health System between March 9 and July 20, 2020, were identified and assigned to the model training (n=1285), validation (n=183), or testing (n=366) sets. 475 patients who presented via the Brown University affiliated hospitals between March 1 and July 18, 2020, were identified for external testing of the models. When chest x-rays were added to clinical data for severity prediction, area under the receiver operating characteristic curve (ROC-AUC) increased from 0·821 (95% CI 0·796–0·828) to 0·846 (0·815–0·852; p<0·0001) on internal testing and 0·731 (0·712–0·738) to 0·792 (0·780–0·803; p<0·0001) on external testing. When deep-learning features were added to clinical data for progression prediction, the concordance index (C-index) increased from 0·769 (0·755–0·786) to 0·805 (0·800–0·820; p<0·0001) on internal testing and 0·707 (0·695–0·729) to 0·752 (0·739–0·764; p<0·0001) on external testing. The image and clinical data combined model had significantly better prognostic performance than combined severity scores and clinical data on internal testing (C-index 0·805 vs 0·781; p=0·0002) and external testing (C-index 0·752 vs 0·715; p<0·0001).
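The C-index reported for the progression models measures how often, among comparable patient pairs, the patient assigned the higher risk progresses earlier. A minimal pure-NumPy version for right-censored data is sketched below; real analyses typically rely on survival libraries, and this toy implementation is not the study's code.

```python
# Minimal concordance index (C-index) for right-censored time-to-event data.
import numpy as np

def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs in which the higher-risk patient has the
    earlier observed event. events: 1 = progression observed, 0 = censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    risk = np.asarray(risk_scores, float)
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # pairs are anchored on a patient with an observed event
        for j in range(n):
            if times[j] > times[i]:  # j outlasted i, so the pair is comparable
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in risk count half
    return concordant / comparable

# Toy example: risk scores perfectly ordered with event times
c = concordance_index([2, 4, 6, 8], [1, 1, 1, 0], [0.9, 0.7, 0.4, 0.1])
print(c)  # 1.0
```

A C-index of 0·5 corresponds to random risk ranking and 1·0 to perfect ranking, which is why the reported gains from 0·707 to 0·752 on external testing are meaningful.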
In patients with COVID-19, artificial intelligence based on chest x-rays had better prognostic performance than clinical data or radiologist-derived severity scores. Using artificial intelligence, chest x-rays can augment clinical data in predicting the risk of progression to critical illness in patients with COVID-19.
Brown University, Amazon Web Services Diagnostic Development Initiative, Radiological Society of North America, National Cancer Institute and National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.
Cranial Nerves: Anatomy, Function and Clinical Significance opens with a summary of current data on the clinical anatomy and developmental anomalies of the first cranial nerve, the olfactory nerve. The authors then provide an overview of the second cranial nerve, the optic nerve, a vital component of the visual pathway. The seventh cranial nerve, the facial nerve, which contains somatic motor, visceral motor, special sensory, and general sensory fibers, is discussed next. The 10th cranial nerve, the vagus nerve, is explored in closing, focusing on its motor functions responsible for the innervation of the outer ear canal, pharynx, larynx, heart, lung, gastrointestinal tract, stomach, pancreas, and liver.
To develop a deep learning model to classify primary bone tumors from preoperative radiographs and compare its performance with that of radiologists.
A total of 1356 patients (2899 images) with histologically confirmed primary bone tumors and preoperative radiographs were identified from five institutions' pathology databases. Radiologists manually cropped the images to label the lesions. Binary discriminatory capacity (benign versus not-benign and malignant versus not-malignant) and three-way classification (benign versus intermediate versus malignant) performance of our model were evaluated. The generalizability of our model was investigated on data from an external test set. Final model performance was compared with interpretations from five radiologists of varying levels of experience using permutation tests.
For benign vs. not benign, the model achieved an area under the curve (AUC) of 0·894 and 0·877 on cross-validation and external testing, respectively. For malignant vs. not malignant, the model achieved an AUC of 0·907 and 0·916 on cross-validation and external testing, respectively. For three-way classification, the model achieved 72·1% accuracy vs. 74·6% and 72·1% for the two subspecialists on cross-validation (p = 0·03 and p = 0·52, respectively). On external testing, the model achieved 73·4% accuracy vs. 69·3%, 73·4%, 73·1%, 67·9%, and 63·4% for the two subspecialists and three junior radiologists (p = 0·14, p = 0·89, p = 0·93, p = 0·02, and p < 0·01 for radiologists 1–5, respectively).
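The model-vs-radiologist comparisons above rely on permutation tests. A hedged sketch of that idea on paired per-case correctness follows; the inputs are synthetic stand-ins, and the study's exact test procedure may differ.

```python
# Permutation test for the accuracy difference between two readers (e.g. a
# model and a radiologist) graded on the same cases. Synthetic data only.
import numpy as np

def permutation_test_accuracy(correct_a, correct_b, n_perm=10000, seed=0):
    """Two-sided p-value for the accuracy difference between readers A and B.
    correct_a/correct_b: boolean arrays of per-case correctness on shared cases.
    Under H0 the readers are exchangeable, so we randomly swap each case's
    pair of results and recompute the accuracy difference."""
    rng = np.random.default_rng(seed)
    a = np.asarray(correct_a, float)
    b = np.asarray(correct_b, float)
    observed = abs(a.mean() - b.mean())
    count = 0
    for _ in range(n_perm):
        swap = rng.random(len(a)) < 0.5       # swap each case with prob 1/2
        a_p = np.where(swap, b, a)
        b_p = np.where(swap, a, b)
        if abs(a_p.mean() - b_p.mean()) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)         # add-one to avoid p = 0

rng = np.random.default_rng(1)
model_correct = rng.random(200) < 0.72        # ~72% accurate reader
reader_correct = rng.random(200) < 0.70       # ~70% accurate reader
p = permutation_test_accuracy(model_correct, reader_correct)
print(f"p = {p:.3f}")
```

Because it conditions on the same cases being read by both parties, the test naturally handles the paired design of reader studies like this one.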
Deep learning can classify primary bone tumors on conventional radiographs in a multi-institutional dataset with accuracy similar to that of subspecialists and better than that of junior radiologists.
The project described was supported by RSNA Research & Education Foundation, through grant number RSCH2004 to Harrison X. Bai.
To determine the epidemiological profile of patients treated at a philanthropic hospital specialized in Orthopedics and Traumatology, located in a major urban center, and to evaluate the efficacy of initial empirical antibiotic treatment.
Patients diagnosed with hand infections from September 2020 to September 2022 were included, excluding cases related to open fractures or post-surgical infections and those with incomplete medical records. The chi-square test, performed in STATISTICA software, was used to assess associations between variables.
A total of 34 patients participated, including 24 men and 10 women, with an average age of 41.9 years. Diabetes mellitus, HIV infection, and drug addiction were most common among the male patients, most of whom resided in urban areas. Half of the patients did not report any apparent trauma. The most common infectious agent was Staphylococcus aureus. Nearly 62% of patients required a change in the initial antibiotic regimen, with penicillin being the most frequently substituted medication. Beta-lactam antibiotics and quinolones were the most effective.
These results suggest the importance of carefully evaluating the epidemiological profile of patients with acute hand infections and improving initial empirical treatment to ensure appropriate and effective therapy.
While COVID-19 diagnosis and prognosis artificial intelligence models exist, very few can be implemented for practical use given their high risk of bias. We aimed to develop a diagnosis model that addresses notable shortcomings of prior studies, integrating it into a fully automated triage pipeline that examines chest radiographs for the presence, severity, and progression of COVID-19 pneumonia. Scans were collected using the DICOM Image Analysis and Archive, a system that communicates with a hospital's image repository. The authors collected over 6,500 non-public chest X-rays comprising diverse COVID-19 severities, along with radiology reports and RT-PCR data. The authors provisioned one internally held-out and two external test sets to assess model generalizability and compare performance to traditional radiologist interpretation. The pipeline was evaluated on a prospective cohort of 80 radiographs, reporting a 95% diagnostic accuracy. The study mitigates bias in AI model development and demonstrates the value of an end-to-end COVID-19 triage platform.
Objectives
There is currently no noninvasive, accurate method to distinguish benign from malignant ovarian lesions before treatment. This study developed a deep learning algorithm that distinguishes benign from malignant ovarian lesions by applying a convolutional neural network to routine MR imaging.
Methods
Five hundred forty-five lesions (379 benign and 166 malignant) from 451 patients at a single institution were divided into training, validation, and test sets in a 7:2:1 ratio. Model performance was compared with that of four junior and three senior radiologists on the test set.
Results
Compared with the junior radiologists averaged, the final ensemble model combining MR imaging and clinical variables had a higher test accuracy (0.87 vs 0.64, p < 0.001) and specificity (0.92 vs 0.64, p < 0.001) with comparable sensitivity (0.75 vs 0.63, p = 0.407). Against the senior radiologists averaged, the final ensemble model also had a higher test accuracy (0.87 vs 0.74, p = 0.033) and specificity (0.92 vs 0.70, p < 0.001) with comparable sensitivity (0.75 vs 0.83, p = 0.557). Assisted by the model's probabilities, the junior radiologists achieved a higher average test accuracy (0.77 vs 0.64, Δ = 0.13, p < 0.001) and specificity (0.81 vs 0.64, Δ = 0.17, p < 0.001) with unchanged sensitivity (0.69 vs 0.63, Δ = 0.06, p = 0.302). With the AI probabilities, the junior radiologists had higher specificity (0.81 vs 0.70, Δ = 0.11, p = 0.005) but similar accuracy (0.77 vs 0.74, Δ = 0.03, p = 0.409) and sensitivity (0.69 vs 0.83, Δ = -0.146, p = 0.097) compared with the senior radiologists.
Conclusions
These results demonstrate that artificial intelligence based on deep learning can assist radiologists in assessing the nature of ovarian lesions and improve their performance.
Key Points
• Artificial Intelligence based on deep learning can assess the nature of ovarian lesions on routine MRI with higher accuracy and specificity than radiologists.
• Assisted by the deep learning model's probabilities, junior radiologists achieved improved performance that matched that of senior radiologists.
Objectives
We aimed to develop deep learning models using longitudinal chest X-rays (CXRs) and clinical data to predict in-hospital mortality of COVID-19 patients in the intensive care unit (ICU).
Methods
Six hundred fifty-four patients (212 deceased, 442 alive, 5645 total CXRs) were identified across two institutions. Imaging and clinical data from one institution were used to train five longitudinal transformer-based networks applying five-fold cross-validation. The models were tested on data from the other institution, and pairwise comparisons were used to determine the best-performing models.
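The five-fold cross-validation scheme described above can be sketched as follows, with a simple stand-in classifier on synthetic data; the study's transformer networks and real CXR/clinical inputs are not reproduced here.

```python
# Sketch of five-fold cross-validation with per-fold AUC, using synthetic
# stand-in features and labels in place of the study's imaging/clinical data.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(654, 10))                        # 654 patients, stand-in features
y = (X[:, 0] + rng.normal(size=654) > 0).astype(int)  # stand-in mortality label

aucs = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[val_idx], clf.predict_proba(X[val_idx])[:, 1]))

print(f"fold AUCs: {[round(a, 3) for a in aucs]}, mean = {np.mean(aucs):.3f}")
```

In the study, the five models trained this way on one institution's data were then evaluated on the other institution's data as an external test.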
Results
A higher proportion of deceased patients had an elevated white blood cell count, a decreased absolute lymphocyte count, an elevated creatinine concentration, and a higher incidence of cardiovascular and chronic kidney disease. A model based on pre-ICU CXRs achieved an AUC of 0.632 and an accuracy of 0.593, and a model based on ICU CXRs achieved an AUC of 0.697 and an accuracy of 0.657. A model based on all longitudinal CXRs (both pre-ICU and ICU) achieved an AUC of 0.702 and an accuracy of 0.694. A model based on clinical data alone achieved an AUC of 0.653 and an accuracy of 0.657. The addition of longitudinal imaging to clinical data in a combined model significantly improved performance, reaching an AUC of 0.727 (p = 0.039) and an accuracy of 0.732.
Conclusions
The addition of longitudinal CXRs to clinical data significantly improves mortality prediction with deep learning for COVID-19 patients in the ICU.
Key Points
• Deep learning was used to predict mortality in COVID-19 ICU patients.
• Serial radiographs and clinical data were used.
• The models could inform clinical decision-making and resource allocation.