Purpose:
Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed to further reduce the number of false positives (FPs). In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs).
Methods:
The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, a type of contrast enhancement filter with a deformable kernel shape. Subsequently, high-uptake regions detected in the PET images are merged with the regions detected in the CT images. FP candidates are eliminated using an ensemble method: it combines two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines (SVMs).
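As a rough sketch of this two-step ensemble, the snippet below prefilters candidates with a simple rule and then classifies the survivors with an SVM over merged hand-crafted and CNN features. All field names (volume_mm3, sphericity, suv_max), the extract_features call, and the thresholds are hypothetical placeholders, not values or interfaces from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def rule_based_filter(candidates, min_vol=30.0, max_vol=30000.0):
    """Step 1: discard candidates whose simple shape measurements fall
    outside plausible nodule ranges (thresholds are placeholders)."""
    return [c for c in candidates if min_vol <= c["volume_mm3"] <= max_vol]

def merged_features(candidate, cnn_model):
    """Concatenate hand-crafted shape/metabolic features with a CNN
    feature vector extracted from the candidate's image patch."""
    handcrafted = np.array([candidate["volume_mm3"],
                            candidate["sphericity"],   # shape feature
                            candidate["suv_max"]])     # metabolic feature
    return np.concatenate([handcrafted,
                           cnn_model.extract_features(candidate["patch"])])

# Step 2: an SVM over the merged feature vectors (X: stacked features,
# y: 1 for true nodules, 0 for FPs).
svm = SVC(kernel="rbf", probability=True)
# svm.fit(X, y); svm.predict(X_new)
```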
Results:
The authors evaluated the detection performance using 104 PET/CT images collected in a cancer-screening program. The sensitivity in detecting candidates at the initial stage was 97.2%, with 72.8 FPs/case. After applying the proposed FP-reduction method, the detection sensitivity was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half of the FPs that remained in the previous study.
Conclusions:
An improved FP-reduction scheme using a CNN has been developed for the detection of pulmonary nodules in PET/CT images. The authors' ensemble FP-reduction method eliminated 93% of the FPs, and the CNN-based method eliminated approximately half of the FPs that remained in the previous study. These results indicate that the method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.
Purpose
We propose a single network trained by pixel‐to‐label deep learning to address the general issue of automatic multiple organ segmentation in three‐dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel‐wise multiple‐class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image.
Methods
We reduce the segmentation of anatomical structures (including multiple organs) in a CT image (generally 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs), which consist of “convolution” and “deconvolution” layers for 2D semantic image segmentation, and expands the core structure with 3D‐2D‐3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel‐to‐label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentation in CT cases of different sizes that cover arbitrary scan regions, without any adjustment.
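The majority voting step can be illustrated with a short NumPy sketch. Here fcn_segment_2d stands in for a trained 2D FCN that maps a slice to an integer label map; the three orthogonal slicing axes and the per-voxel vote are illustrative assumptions, not the paper's exact 3D-2D-3D transformations.

```python
import numpy as np

def segment_volume_by_voting(volume, fcn_segment_2d, n_classes):
    """Apply a 2D FCN to slices along three orthogonal axes and assign
    each voxel the label that wins the majority vote across views."""
    votes = []
    for axis in (0, 1, 2):                    # axial, coronal, sagittal
        labels = np.stack([fcn_segment_2d(np.take(volume, i, axis=axis))
                           for i in range(volume.shape[axis])], axis=axis)
        votes.append(labels)
    votes = np.stack(votes)                   # shape: (3, z, y, x)
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)              # per-voxel winning label
```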
Results
The proposed network was trained and validated on the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (the lumen and the contents of the stomach). Some of these structures have never been reported in previous research on CT segmentation. A database of 240 3D CT scans (95% for training and 5% for testing), together with their manually annotated ground‐truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy against the ground truth (88.1% and 87.9% of voxels were labeled correctly in the training and testing datasets, respectively).
Conclusions
We propose a single network based on pixel‐to‐label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work lies in learning the different 2D sectional appearances of 3D anatomical structures in CT cases and in obtaining the 3D segmentation result by majority voting over multiple crossed 2D sections, which achieves availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise.
Lung cancer is a leading cause of death worldwide. Currently, in the differential diagnosis of lung cancer, accurate classification of cancer types (adenocarcinoma, squamous cell carcinoma, and small cell carcinoma) is required. However, improving the accuracy and stability of diagnosis is challenging. In this study, we developed an automated classification scheme for lung cancers presented in microscopic images using a deep convolutional neural network (DCNN), a major deep learning technique. The DCNN used for classification consists of three convolutional layers, three pooling layers, and two fully connected layers. In the evaluation experiments, the DCNN was trained on our original database using a graphics processing unit. Microscopic images were first cropped and resampled to obtain images with a resolution of 256 × 256 pixels and, to prevent overfitting, the collected images were augmented via rotation, flipping, and filtering. The probabilities of the three types of cancers were estimated using the developed scheme, and its classification accuracy was evaluated using threefold cross-validation. Approximately 71% of the images were classified correctly, which is on par with the accuracy of cytotechnologists and pathologists. Thus, the developed scheme is useful for the classification of lung cancers from microscopic images.
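The stated topology (three convolutional, three pooling, and two fully connected layers) can be written down in a few lines; the filter counts, kernel sizes, and optimizer below are assumptions for illustration, since the abstract does not specify them.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(256, 256, 3)),               # cropped/resampled patch
    layers.Conv2D(32, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),            # fully connected 1
    layers.Dense(3, activation="softmax"),           # adeno/squamous/small cell
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```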
Abstract
Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording dental charts for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used to train the DCNN. To examine the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce overfitting, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, augmentation yielded an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method is advantageous in achieving high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful for the automatic filing of dental charts for forensic identification.
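A minimal sketch of the two augmentation operations named above (rotation and intensity transformation), assuming 2D ROI arrays; the specific angles and gamma values are placeholders, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def augment_roi(roi, angles=(90, 180, 270), gammas=(0.8, 1.2)):
    """Expand one tooth ROI into several variants by rotation and a
    simple gamma-style intensity transformation."""
    variants = [roi]
    variants += [ndimage.rotate(roi, a, reshape=False, mode="nearest")
                 for a in angles]
    lo, hi = float(roi.min()), float(roi.max())
    norm = (roi - lo) / (hi - lo + 1e-8)              # scale to [0, 1]
    variants += [norm ** g * (hi - lo) + lo for g in gammas]
    return variants
```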
Objectives
The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for detecting vertical root fracture (VRF) on panoramic radiography.
Methods
Three hundred panoramic images containing a total of 330 VRF teeth with clearly visible fracture lines were selected from our hospital imaging database. Confirmation of VRF lines was performed by two radiologists and one endodontist. Eighty percent (240 images) of the 300 images were assigned to a training set and 20% (60 images) to a test set. A CNN-based deep learning model for the detection of VRFs was built using DetectNet with DIGITS version 5.0. To mitigate test data selection bias and increase reliability, fivefold cross-validation was performed. Diagnostic performance was evaluated using recall, precision, and F-measure.
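For reference, the three metrics reduce to simple ratios over lesion-level counts; a minimal sketch (counting conventions for partial detections are not specified in the abstract):

```python
def detection_metrics(n_detected, n_total, n_false_positive):
    """Recall, precision, and F-measure for lesion-level detection."""
    recall = n_detected / n_total
    precision = n_detected / (n_detected + n_false_positive)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure
```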
Results
Of the 330 VRFs, 267 were detected. Twenty teeth without fractures were falsely detected. Recall was 0.75, precision 0.93, and F-measure 0.83.
Conclusions
The CNN learning model has shown promise as a tool to detect VRFs on panoramic images and to function as a computer-aided diagnosis (CAD) tool.
Objectives
To apply a deep-learning system for diagnosis of maxillary sinusitis on panoramic radiography, and to clarify its diagnostic performance.
Methods
Training data for 400 healthy and 400 inflamed maxillary sinuses were enhanced to 6000 samples in each category by data augmentation. Image patches were input into a deep-learning system, the learning process was repeated for 200 epochs, and a learning model was created. Newly prepared testing image patches from 60 healthy and 60 inflamed sinuses were input into the learning model, and the diagnostic performance was calculated. Receiver-operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were obtained. The results were compared with those of two experienced radiologists and two dental residents.
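The ROC/AUC evaluation can be reproduced with scikit-learn; y_true and y_score below are assumed arrays of ground-truth labels and the model's predicted probabilities of inflammation for the test patches.

```python
from sklearn.metrics import roc_curve, roc_auc_score

def evaluate_patches(y_true, y_score):
    """ROC points and AUC for a binary (healthy=0 / inflamed=1) classifier."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return fpr, tpr, roc_auc_score(y_true, y_score)
```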
Results
The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was high, with an accuracy of 87.5%, a sensitivity of 86.7%, a specificity of 88.3%, and an AUC of 0.875. These values showed no significant differences compared with those of the radiologists and were higher than those of the dental residents.
Conclusions
The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was sufficiently high. Results from the deep-learning system are expected to provide diagnostic support for inexperienced dentists.
Cytology is the first pathological examination performed in the diagnosis of lung cancer. In our previous study, we introduced a deep convolutional neural network (DCNN) to automatically classify cytological images as having benign or malignant features and achieved an accuracy of 81.0%. To further improve the DCNN's performance, it is necessary to train the network using more images. However, it is difficult to acquire cell images containing diverse cytological features, because doing so requires many manual operations with a microscope. Therefore, in this study, we aim to improve the classification accuracy of a DCNN by using both actual and synthesized cytological images produced with a generative adversarial network (GAN). In the proposed method, patch images were obtained from microscopy images, and many additional similar images were generated from them using a GAN. Specifically, we introduce progressive growing of GANs (PGGAN), which enables the generation of high-resolution images. These generated images were used to pretrain a DCNN, which was then fine-tuned using the actual patch images. To confirm the effectiveness of the proposed method, we first evaluated the quality of the images generated by PGGAN and by a conventional deep convolutional GAN, and confirmed that the generated images had characteristics similar to those of the actual images. We then evaluated the classification performance for benign and malignant cells. The overall classification accuracy of lung cells was 85.3%, an improvement of approximately 4.3% over the previous study conducted without pretraining on GAN-generated images. Based on these results, we confirmed that our proposed method is effective for the classification of cytological images in cases where only limited data are available.
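The pretrain-then-fine-tune workflow can be sketched as two training passes; build_dcnn, gan_patches, and real_patches are assumed placeholders (a model builder and two tf.data pipelines), and the learning rates and epoch counts are illustrative, not the paper's settings.

```python
from tensorflow.keras import optimizers

dcnn = build_dcnn()  # assumed builder for the benign/malignant classifier

# 1) Pretrain on PGGAN-generated patch images.
dcnn.compile(optimizer=optimizers.Adam(1e-3),
             loss="binary_crossentropy", metrics=["accuracy"])
dcnn.fit(gan_patches, epochs=20)

# 2) Fine-tune on actual patch images with a smaller learning rate.
dcnn.compile(optimizer=optimizers.Adam(1e-4),
             loss="binary_crossentropy", metrics=["accuracy"])
dcnn.fit(real_patches, epochs=10)
```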
The distal root of the mandibular first molar occasionally has an extra root, which can directly affect the outcome of endodontic therapy. In this study, we examined the diagnostic performance of a deep learning system for classification of the root morphology of mandibular first molars on panoramic radiographs. Dental cone-beam CT (CBCT) was used as the gold standard.
CBCT images and panoramic radiographs of 760 mandibular first molars from 400 patients who had not undergone root canal treatment were analyzed. Distal roots were examined on CBCT images to determine the presence of a single root or an extra root. Image patches of the roots were segmented from the panoramic radiographs and applied to a deep learning system, and its diagnostic performance in classifying root morphology was examined.
Extra roots were observed in 21.4% of distal roots on CBCT images. The deep learning system had a diagnostic accuracy of 86.9% in determining whether distal roots were single or had extra roots.
The deep learning system showed high accuracy in the differential diagnosis of a single or extra root in the distal roots of mandibular first molars.
Abstract
Artificial intelligence (AI) applications in medical imaging continue to face difficulty in collecting and using large datasets. One method proposed for solving this problem is data augmentation using fictitious images generated by generative adversarial networks (GANs). However, applying a GAN as a data augmentation technique has not been fully explored, owing to limitations in the quality and diversity of the generated images. To promote such applications by generating diverse images, this study aims to generate free-form lesion images from tumor sketches using a pix2pix-based model, an image-to-image translation model derived from the GAN framework. Because pix2pix assumes one-to-one image generation and is therefore unsuitable for data augmentation, we propose StylePix2pix, an independently improved model that allows one-to-many image generation. The proposed model introduces a mapping network and style blocks from StyleGAN. Image generation results based on 20 tumor sketches created by a physician demonstrate that the proposed method can reproduce tumors with complex shapes. Additionally, the one-to-many image generation of StylePix2pix suggests its effectiveness in data-augmentation applications.
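The two StyleGAN components named above can be sketched as follows; the dimensions and layer counts are assumptions, and exactly how StylePix2pix injects the style into the pix2pix decoder is not detailed in the abstract.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a random latent z to an intermediate style vector w, so that
    different z yield different outputs for the same sketch (one-to-many)."""
    def __init__(self, z_dim=512, w_dim=512, n_layers=4):
        super().__init__()
        blocks = []
        for i in range(n_layers):
            blocks += [nn.Linear(z_dim if i == 0 else w_dim, w_dim),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)

class StyleBlock(nn.Module):
    """Modulates a decoder feature map with per-channel scale and bias
    derived from w (AdaIN-like style injection)."""
    def __init__(self, w_dim, channels):
        super().__init__()
        self.affine = nn.Linear(w_dim, channels * 2)

    def forward(self, x, w):                  # x: (N, C, H, W), w: (N, w_dim)
        scale, bias = self.affine(w).chunk(2, dim=1)
        return x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
```

Sampling several z vectors through MappingNetwork for a single sketch input is what turns the one-to-one pix2pix translation into a one-to-many generator suitable for data augmentation.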
Abstract
Textural features can be useful in differentiating between benign and malignant breast lesions on mammograms. Unlike previous computerized schemes, which relied largely on shape and margin features based on manual contours of masses, textural features can be determined from regions of interest (ROIs) without precise lesion segmentation. In this study, therefore, we investigated an ROI-based feature, radial local ternary patterns (RLTP), which takes into account the direction of edge patterns with respect to the center of the mass, for classifying ROIs as benign or malignant masses. Using artificial neural network (ANN), support vector machine (SVM), and random forest (RF) classifiers, the classification ability of RLTP was compared with those of regular local ternary patterns (LTP), rotation-invariant uniform (RIU2) LTP, texture features based on the gray-level co-occurrence matrix (GLCM), and wavelet features. The performance was evaluated on 376 ROIs comprising 181 malignant and 195 benign masses. The highest areas under the receiver operating characteristic curve among the three classifiers were 0.90, 0.77, 0.78, 0.86, and 0.83 for RLTP, LTP, RIU2-LTP, GLCM, and wavelet features, respectively. The results indicate the usefulness of the proposed texture features for distinguishing between benign and malignant lesions and the superiority of the radial patterns over the conventional rotation-invariant patterns.
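As background for the RLTP feature, the sketch below computes plain 3x3 local ternary pattern codes; RLTP additionally organizes the codes by their radial direction toward the mass center, which is omitted here. The tolerance t is an assumed parameter.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Basic 3x3 local ternary pattern: each of the 8 neighbors is coded
    +1 / 0 / -1 depending on whether it exceeds, matches (within t), or
    falls below the center pixel."""
    h, w = img.shape
    center = img[1:h-1, 1:w-1].astype(np.int32)
    codes = np.zeros((h - 2, w - 2, 8), dtype=np.int8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:h-1+dy, 1+dx:w-1+dx].astype(np.int32)
        codes[..., k] = np.where(nb > center + t, 1,
                        np.where(nb < center - t, -1, 0))
    return codes
```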