Purpose
Accurate segmentation of organs-at-risk (OARs) is a key step in efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images, and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability.
Methods
Convolutional neural networks (CNNs)—a concept from the field of deep learning—were used to learn consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest in the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results using a Markov random field algorithm. We finally extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate–erode operations to remove cavities in the component, which resulted in segmentation of the OAR in the test image.
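The final post-processing steps described above (omitting the Markov random field smoothing) can be sketched as follows; the function name and structuring elements are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def postprocess_oar_mask(prob_map, threshold=0.5):
    """Threshold a voxel-wise CNN probability map, keep the largest
    connected component, and close internal cavities (illustrative sketch)."""
    mask = prob_map >= threshold
    # label connected components with full 26-connectivity in 3D
    labels, n = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    if n == 0:
        return np.zeros_like(mask)
    # keep only the largest connected component
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (np.argmax(sizes) + 1)
    # dilate-erode (morphological closing) to remove cavities
    return ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
```

The closing operation fills small holes left by misclassified interior voxels without changing the overall component shape.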
Results
The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results ranged from a Dice coefficient (DSC) of 37.4% for the optic chiasm to 89.5% for the mandible. We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance for segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance for segmentation of the submandibular glands and optic chiasm.
Conclusion
We concluded that convolutional neural networks can accurately segment most OARs using a representative database of 50 HaN CT images. At the same time, inclusion of additional information, for example, MR images, may be beneficial for OARs with poorly visible boundaries.
Background
Accurate prediction of radiation toxicity of healthy organs-at-risk (OARs) critically determines the success of radiation therapy (RT). Existing dose–volume histogram-based metrics may grossly under- or overestimate therapeutic toxicity, in 27% of liver RT and 50% of head-and-neck RT cases. We propose a novel paradigm for toxicity prediction that leverages the enormous potential of deep learning and goes beyond existing dose–volume histograms.
Experimental Design
We employed a database of 125 liver stereotactic body RT (SBRT) cases with follow-up data to train a deep learning-based toxicity predictor. Convolutional neural networks (CNNs) were applied to discover consistent patterns in 3D dose plans associated with toxicities. To enhance predictive power, we first pretrained the CNNs via transfer learning from 3D CT images of 2644 human organs; the CNNs were then trained on the liver SBRT cases. Furthermore, nondosimetric pretreatment features, such as patient demographics, underlying liver diseases, and liver-directed therapies, were input into a fully connected neural network for a more comprehensive prediction. Saliency maps of the CNNs were used to estimate the toxicity risks associated with irradiation of anatomical regions of specific OARs. In addition, we applied machine learning solutions to map numerical pretreatment features to hepatobiliary toxicity manifestation.
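The late-fusion idea—combining a dose-plan representation with numeric pretreatment features—can be illustrated with a minimal sketch. Here the CNN embedding is assumed to be precomputed, and a logistic head stands in for the fully connected fusion layers; all names, dimensions, and data are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_and_fit(cnn_embeddings, numeric_features, labels):
    """Concatenate precomputed CNN dose-plan embeddings with numeric
    pretreatment features and fit a linear head on the fused vector."""
    fused = np.concatenate([cnn_embeddings, numeric_features], axis=1)
    return LogisticRegression(max_iter=1000).fit(fused, labels)

# hypothetical toy cohort: 20 patients, 16-dim embedding, 3 numeric features
rng = np.random.default_rng(0)
emb = rng.normal(size=(20, 16))
num = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20)
clf = fuse_and_fit(emb, num, y)
probs = clf.predict_proba(np.concatenate([emb, num], axis=1))[:, 1]
```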
Results
Among 125 liver SBRT patients, 58 were treated for liver metastases, 36 for hepatocellular carcinoma, 27 for cholangiocarcinoma, and 4 for other histologies. We observed that the CNN was able to achieve accurate hepatobiliary toxicity prediction with an AUC of 0.79, whereas combining the CNN for 3D dose-plan analysis with a fully connected neural network for numerical feature analysis resulted in an AUC of 0.85. Deep learning produced almost two times fewer false-positive toxicity predictions than DVH-based predictions when the number of false negatives, that is, missed toxicities, was minimized. The CNN saliency maps automatically estimated the toxicity risks for portal vein (PV) regions. We discovered that irradiation of the proximal portal vein is associated with a two times higher toxicity risk (risk score: 0.66) than irradiation of the left portal vein (risk score: 0.31).
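The operating-point comparison quoted above—counting false positives once false negatives are minimized—can be sketched as follows (scores and labels are hypothetical toy data):

```python
import numpy as np

def false_positives_at_full_sensitivity(scores, labels):
    """Pick the highest threshold that still catches every true case
    (zero false negatives) and count the false positives it incurs."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    threshold = scores[labels].min()  # lowest score among true cases
    predicted = scores >= threshold
    return int(np.sum(predicted & ~labels))
```

Applying this to both the deep-learning and the DVH-based scores gives the false-positive counts being compared.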
Conclusions
The framework offers clinically accurate tools for hepatobiliary toxicity prediction and automatic identification of anatomical regions that are critical to spare during SBRT.
Super-resolution reconstruction can be used to recover a high-resolution image from a low-resolution image and is particularly beneficial for clinically significant medical images in diagnosis, treatment, and research applications. However, super-resolution is a challenging inverse problem due to its ill-posed nature. In this paper, inspired by recent developments in deep learning, a super-resolution algorithm (SR-DCNN) is proposed for medical images that is based on a neural network and employs a deconvolution operation. The purpose of the deconvolution is to effectively establish an end-to-end mapping between the low- and high-resolution images. First, training data consisting of 1500 medical images of the lung, brain, heart, and spine was collected, down-sampled, and input into the neural network. Then, patch-based image features were extracted using a set of filters, and the parametric rectified linear unit (PReLU) was subsequently applied as the activation function. Finally, these extracted image features were used to reconstruct high-resolution images by minimizing the loss between the predicted output image and the original high-resolution image. Various network structures and hyperparameter settings were explored to achieve a good trade-off between performance and computational efficiency, based on which a four-layer network was found to achieve the best result in terms of the peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), information entropy (IE), and execution speed. The network was then validated on test data, and it was demonstrated that the proposed SR-DCNN algorithm quantitatively and qualitatively outperformed the current state-of-the-art methods.
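Two of the quoted evaluation metrics, PSNR and information entropy, can be written down directly; SSIM is omitted for brevity, and the 8-bit intensity range is an assumption:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def information_entropy(image, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Higher PSNR indicates a reconstruction closer to the reference; higher entropy indicates richer intensity content.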
Automated and semi-automated detection and segmentation of spinal and vertebral structures from computed tomography (CT) images is a challenging task due to a relatively high degree of anatomical complexity, the presence of unclear boundaries and articulation of vertebrae with each other, as well as insufficient image spatial resolution, partial volume effects, the presence of image artifacts, intensity variations, and low signal-to-noise ratio. In this paper, we describe a novel framework for automated spine and vertebra detection and segmentation from 3-D CT images. A novel optimization technique based on interpolation theory is applied to detect the location of the whole spine in the 3-D image and, using the obtained location of the whole spine, to further detect the location of individual vertebrae within the spinal column. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is performed by an improved shape-constrained deformable model approach. The framework was evaluated on two publicly available CT spine image databases of 50 lumbar and 170 thoracolumbar vertebrae. Quantitative comparison against corresponding reference vertebra segmentations yielded an overall mean centroid-to-centroid distance of 1.1 mm and Dice coefficient of 83.6% for vertebra detection, and an overall mean symmetric surface distance of 0.3 mm and Dice coefficient of 94.6% for vertebra segmentation. The results indicate that by applying the proposed automated detection and segmentation framework, vertebrae can be successfully detected and accurately segmented in 3-D from CT spine images.
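The two reported evaluation measures can be sketched for binary masks as follows (function names and the voxel-spacing parameter are illustrative):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks, in percent."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 100.0
    return 200.0 * np.logical_and(a, b).sum() / denom

def centroid_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Euclidean distance (mm) between the centroids of two binary masks,
    scaled by the voxel spacing."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.array(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.array(spacing)
    return float(np.linalg.norm(ca - cb))
```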
Cephalometric analysis is an essential clinical and research tool in orthodontics for orthodontic analysis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set to explore and compare automatic landmark detection methods for cephalometric X-ray images. Methods were evaluated on a common database including cephalograms of 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground-truth data. Quantitative evaluation was performed to compare the results of a representative selection of current methods submitted to the challenge. Experimental results show that three methods are able to achieve detection rates greater than 80% using the 4 mm precision range, but only one method achieves a detection rate greater than 70% using the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
• We organized two challenges covering landmark detection, pathology classification, and teeth segmentation in dental X-ray image analysis.
• Datasets include 400 cephalometric images and 120 bitewing images with a reference standard generated by medical experts.
• The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field.
Dental radiography plays an important role in clinical diagnosis, treatment, and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical use. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods, and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/)
Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) for fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs for detection of landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs for recognition of landmark appearance patterns. CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy (∼1% to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy (∼76% average classification accuracy for the test set). We demonstrate that CNNs, which merely input raw image patches, are promising for accurate quantitative cephalometry.
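Turning per-landmark probability maps into coordinates and scoring them against a precision range can be sketched as follows; the shape-based refinement is omitted, and the pixel spacing is a hypothetical value:

```python
import numpy as np

def heatmap_to_landmark(heatmap):
    """Return the (row, col) of the maximum of a 2-D probability heatmap."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def success_detection_rate(pred, truth, pixel_mm=0.1, precision_mm=2.0):
    """Fraction of landmarks whose predicted position falls within
    precision_mm of the reference position (pixel_mm is illustrative)."""
    diff = (np.asarray(pred) - np.asarray(truth)) * pixel_mm
    distances = np.linalg.norm(diff, axis=1)
    return float(np.mean(distances <= precision_mm))
```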
In 2020, an experiment testing AI solutions for lung X-ray analysis on a multi-hospital network was conducted. The multi-hospital network linked 178 Moscow state healthcare centers, where all chest X-rays from the network were redirected to a research facility, analyzed with AI, and returned to the centers. The experiment was formulated as a public competition with monetary awards for participating industrial and research teams. The task was to perform the binary detection of abnormalities from chest X-rays. For an objective real-life evaluation, no training X-rays were provided to the participants. This paper presents one of the top-performing AI frameworks from this experiment. First, the framework used two EfficientNets, histograms of gradients, Haar feature ensembles, and local binary patterns to recognize whether an input image represents an acceptable lung X-ray sample, meaning the X-ray is not grayscale inverted, is a frontal chest X-ray, and completely captures both lung fields. Second, the framework extracted the region with lung fields and then passed it to a multi-head DenseNet, where the heads recognized the patient's gender, age, and the potential presence of abnormalities, and generated a heatmap with the abnormality regions highlighted. During one month of the experiment, from 11.23.2020 to 12.25.2020, 17,888 cases were analyzed by the framework, with 11,902 cases having radiological reports with reference diagnoses that were unequivocally parsed by the experiment organizers. The performance measured in terms of the area under the receiver operating characteristic curve (AUC) was 0.77. The AUC for individual diseases ranged from 0.55 for herniation to 0.90 for pneumothorax.
Percutaneous nephrolithotomy (PCNL) is the current standard of care for patients with a total renal stone burden >20 mm. Gaining access to the kidney is a crucial step, as the position of the percutaneous tract can affect the ability to manipulate a nephroscope during the procedure. However, gaining percutaneous access under fluoroscopic guidance has a challenging learning curve, and only a minority of urologists can successfully establish such access. In addition to difficult access, PCNL carries a risk of bleeding and the need for blood transfusion. Robotic assistance may be a key towards accurate and reliable access. Beyond assisting with renal access, a robotic platform can record data of importance related to the user's activities via sensor-equipped instruments. The analysis of these activities is crucial for understanding what constitutes a successful and safe procedure. In this paper, we harness the power of machine learning to automatically analyze physicians' activities during robotic-assisted renal access using the Monarch® Platform, Urology. A machine learning framework based on a combination of a 1-dimensional U-net and random forests was developed to find consistent patterns in the sensor data characteristic of needle insertions. This framework retrospectively analyzed data previously obtained from 248 percutaneous renal access procedures. These procedures were performed on 18 human cadaveric models by 17 practicing urologists and one urologist proxy. The framework automatically recognized 94% of all first needle insertions in each procedure and labeled them with an accuracy of 0.81 in terms of the Dice coefficient. The recognition accuracy for secondary insertions was 66%.
The automatically detected needle insertions were used to calculate clinical metrics such as tract length and the anterior-posterior and cranial-caudal angles of the insertion site, as well as user-skill metrics such as trajectory deviation and targeting accuracy.
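A hypothetical sketch of how such geometric metrics could be derived from a detected insertion, given skin-entry and target points in millimeters; the axis convention here is an assumption, not the platform's actual coordinate frame:

```python
import numpy as np

def tract_metrics(entry, target):
    """Tract length (mm) and insertion angles (degrees) from a skin-entry
    point to a calyx-target point. Assumed axes: x = left-right,
    y = anterior-posterior, z = cranial-caudal."""
    v = np.asarray(target, float) - np.asarray(entry, float)
    length = float(np.linalg.norm(v))
    # elevation of the tract out of the axial (x-y) plane
    cranial_caudal = float(np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1]))))
    # direction within the axial plane, measured from the y axis
    anterior_posterior = float(np.degrees(np.arctan2(v[0], v[1])))
    return length, anterior_posterior, cranial_caudal
```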
Patients with severe COVID-19 have overwhelmed healthcare systems worldwide. We hypothesized that machine learning (ML) models could be used to predict risks at different stages of management and thereby provide insights into drivers and prognostic markers of disease progression and death. From a cohort of approximately 2.6 million citizens in Denmark, SARS-CoV-2 PCR tests were performed on subjects suspected of COVID-19 disease; 3944 cases had at least one positive test and were subjected to further analysis. SARS-CoV-2 positive cases from the United Kingdom Biobank were used for external validation. The ML models predicted the risk of death with a receiver operating characteristic area under the curve (ROC-AUC) of 0.906 at diagnosis, 0.818 at hospital admission, and 0.721 at intensive care unit (ICU) admission. Similar metrics were achieved for predicted risks of hospital and ICU admission and use of mechanical ventilation. Common risk factors included age, body mass index, and hypertension, although the top risk features shifted towards markers of shock and organ dysfunction in ICU patients. The external validation indicated fair predictive performance for mortality prediction, but suboptimal performance for predicting ICU admission. ML may be used to identify drivers of progression to more severe disease and for prognostication in patients with COVID-19. We provide access to an online risk calculator based on these findings.
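The general recipe—training a tabular classifier on pretreatment features and scoring it with ROC-AUC on held-out cases—can be sketched on toy data; gradient boosting is an illustrative choice here, not necessarily the authors' model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# hypothetical toy cohort: 200 subjects, 5 tabular features;
# the outcome is driven mostly by the first (age-like) feature
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# simple train/test split: fit on 150 subjects, evaluate on the rest
model = GradientBoostingClassifier(random_state=0).fit(X[:150], y[:150])
auc = roc_auc_score(y[150:], model.predict_proba(X[150:])[:, 1])
```

ROC-AUC summarizes ranking quality across all thresholds, which is why it is reported once per management stage rather than per decision cutoff.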