In the detection of COVID-19, chest X-ray (CXR) and CT scan images are the two main imaging methods, providing an important basis for doctors' diagnoses. Current convolutional neural network (CNN) approaches to detecting COVID-19 in medical radiological images suffer from low accuracy, complex algorithms, and an inability to mark feature regions. To address these problems, this paper proposes an algorithm combining Grad-CAM color visualization with a convolutional neural network (GCCV-CNN). The algorithm can quickly classify lung X-ray and CT scan images of COVID-19-positive patients, COVID-19-negative patients, general pneumonia patients, and healthy people, while also quickly locating the critical regions in those images. Finally, the algorithm obtains more accurate detection results through the synthesis of deep learning algorithms. To verify the effectiveness of the GCCV-CNN algorithm, experiments are performed on three COVID-19-positive patient datasets.
This study presents an investigation of optical emission spectroscopy of plasma using an interpretable convolutional neural network (CNN) for real-time classification of volatile organic compounds (VOCs). A microplasma-generation platform was developed to efficiently collect 64 k spectra from various types of VOCs at different concentrations, serving as training and testing sets for machine learning. A CNN model was trained to classify VOCs with an accuracy of 99.9%. To interpret the CNN model and its predictions, the spectral processing mechanism of the CNN was visualized via feature maps, and the critical spectral features were identified by gradient-weighted class activation mapping. Such approaches bring insights into how the CNN analyzes the spectra and enable the CNN operation to be explainable. Finally, the CNN model was incorporated with the microplasma platform to demonstrate real-time VOC monitoring: the type of VOC can be identified and reported via messages within 10 s once the microplasma is ignited. We believe that using CNNs opens a novel route for plasma spectroscopy analysis for VOC classification and impacts the fields of plasma, spectroscopy, and environmental monitoring.
•CNN was used to analyze plasma spectroscopy for VOC identification.
•The CNN model was used to classify the VOCs with accuracy >99.8%.
•Grad-CAM was used to interpret the CNN predictions.
•Real-time and online monitoring of VOCs was performed with an instant warning message.
•Classification framework to distinguish between COVID-19, normal and pneumonia CXR images
•Handpicked features in conjunction with those obtained via transfer learning using ResNet50
•10-fold cross-validation accuracy of 0.974 ± 0.02 at 95% confidence interval
•Gradient-based localizations captured using Grad-CAM serve as clinical evidence
•Efficacy of the proposed work, ascertained by validation on an independent cohort
Coronaviruses are a family of viruses that mainly cause respiratory disorders in humans. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a new strain of coronavirus that causes coronavirus disease 2019 (COVID-19). The WHO has declared COVID-19 a pandemic, as it has spread across the globe due to its highly contagious nature. For early diagnosis of COVID-19, the reverse transcription-polymerase chain reaction (RT-PCR) test is commonly done. However, it suffers from a high false-negative rate of up to 67% if the test is done during the first five days of exposure. As an alternative, research into the efficacy of deep learning techniques for identifying COVID-19 from chest X-ray images is being intensely pursued.
As pneumonia and COVID-19 exhibit similar, overlapping symptoms and both affect the human lungs, distinguishing between the chest X-ray images of pneumonia patients and COVID-19 patients is challenging. In this work, we have modeled COVID-19 classification as a multiclass problem involving three classes, namely COVID-19, pneumonia, and normal. We have proposed a novel classification framework which combines a set of handpicked features with those obtained from a deep convolutional neural network. The proposed framework comprises three modules. In the first module, we exploit the strength of transfer learning using ResNet-50 to train the network on a set of preprocessed images and obtain a vector of 2048 features. In the second module, we construct a pool of 252 frequency- and texture-based handpicked features that are reduced to a set of 64 features using PCA. Subsequently, these are passed to a feed-forward neural network to obtain a set of 16 features. The third module concatenates the features obtained from the first and second modules and passes them to a dense layer followed by a softmax layer to yield the desired classification model. We have used chest X-ray images of COVID-19 patients from four independent publicly available repositories, in addition to images from the Mendeley and Kaggle Chest X-Ray datasets for the pneumonia and normal cases.
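The 252-to-64 reduction of the handpicked features in the second module is a standard PCA projection. A minimal NumPy sketch of such a projection (the 252 and 64 dimensions follow the abstract; the sample count of 500 and the random data are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 252))   # 500 images x 252 handpicked features (illustrative)

# Centre the data, then project onto the top-64 principal components via SVD.
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:64]              # top 64 principal directions (rows are orthonormal)
X_reduced = Xc @ components.T     # shape (500, 64), fed to the feed-forward network

print(X_reduced.shape)            # (500, 64)
```

In the framework described above, the resulting 64-dimensional vectors would then pass through the feed-forward network to yield 16 features before concatenation with the ResNet-50 features.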
To establish the efficacy of the proposed model, 10-fold cross-validation is carried out. The model generated an overall classification accuracy of 0.974 ± 0.02 and sensitivities of 0.987 ± 0.05, 0.963 ± 0.05, and 0.973 ± 0.04 at a 95% confidence interval for the COVID-19, normal, and pneumonia classes, respectively. To further ensure its effectiveness, the model was validated on an independent chest X-ray cohort, where an overall classification accuracy of 0.979 was achieved. Comparison of the proposed framework with state-of-the-art methods reveals that it outperforms them in terms of accuracy and sensitivity. Since interpretability of results is crucial in the medical domain, gradient-based localizations are captured using Gradient-weighted Class Activation Mapping (Grad-CAM). In summary, the results obtained are stable over independent cohorts and interpretable using Grad-CAM localizations, which serve as clinical evidence.
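A figure like "0.974 ± 0.02 at a 95% confidence interval" is a mean over the 10 fold accuracies with a confidence half-width. A hedged sketch of how such an interval is typically computed from per-fold scores (the fold values below are made up, not the paper's):

```python
import numpy as np

# Hypothetical per-fold accuracies from 10-fold cross-validation
fold_acc = np.array([0.970, 0.980, 0.960, 0.990, 0.975,
                     0.970, 0.980, 0.965, 0.985, 0.970])

mean = fold_acc.mean()
# 95% CI half-width using the normal approximation; a t-quantile
# (~2.262 for 9 degrees of freedom) is also common for 10 folds.
half_width = 1.96 * fold_acc.std(ddof=1) / np.sqrt(len(fold_acc))
print(f"{mean:.3f} \u00b1 {half_width:.3f}")
```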
•A custom deep learning framework for detecting fire in real-world images.
•An attention mechanism and transfer learning are used with a trained EfficientNetB0.
•The framework uses the Grad-CAM method for visualization and localization of fire.
•A high recall of 97.61% supports the reliability of the model for the fire detection task.
Fire is a severe natural calamity that causes significant harm to human lives and the environment. Recent works have proposed the use of computer vision for developing cost-effective automated fire detection systems. This paper presents a custom framework for detecting fire using transfer learning with state-of-the-art CNNs trained on real-world fire breakout images. The framework also uses the Grad-CAM method for the visualization and localization of fire in the images. The model additionally uses an attention mechanism that significantly assists the network in achieving better performance; Grad-CAM results showed that the proposed use of attention led the model towards better localization of fire in the images. Among the plethora of models explored, EfficientNetB0 emerged as the best-suited network for the problem. On the selected real-world fire image dataset, a test accuracy of 95.40% strongly supports the model's efficiency in detecting fire from the presented image samples. A very high recall of 97.61% also highlights that the model has negligible false negatives, suggesting that the network is reliable for fire detection.
Prognostication for comatose patients after cardiac arrest (CA) is a difficult but essential task. Currently, visual interpretation of the electroencephalogram (EEG) is one of the main modalities used in outcome prediction. There is growing interest in computer-assisted EEG interpretation, either to overcome the possible subjectivity of visual interpretation or to identify complex features of the EEG signal. We used a one-dimensional convolutional neural network (CNN) to predict functional outcome based on 19-channel EEG recorded from 267 adult comatose patients during targeted temperature management after CA. The area under the receiver operating characteristic curve (AUC) on the test set was 0.885. Interestingly, model architecture and fine-tuning played only a marginal role in classification performance. We then used gradient-weighted class activation mapping (Grad-CAM) as a visualization technique to identify which EEG features the network used to classify an EEG epoch as favorable or unfavorable outcome, and also to understand failures of the network. Grad-CAM showed that the network relied on features similar to those used in classical visual analysis for predicting unfavorable outcome (suppressed background, epileptiform transients). This study confirms that CNNs are promising models for EEG-based prognostication in comatose patients, and that Grad-CAM can provide explanations for the models' decision-making, which is of utmost importance for the future use of deep learning models in a clinical setting.
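Grad-CAM's core computation is the same in one dimension as in images: each channel's importance weight is the gradient of the class score averaged over the temporal axis, and the map is a ReLU of the weighted sum of feature maps. A minimal NumPy sketch under those definitions (the feature maps and gradients are random stand-ins for a real network's activations; shapes are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
C, T = 8, 100                            # channels x time steps of a conv layer (illustrative)
feature_maps = rng.normal(size=(C, T))   # A_k: activations of the target layer
grads = rng.normal(size=(C, T))          # dY_c/dA_k: gradients of the class score

alpha = grads.mean(axis=1)               # one importance weight per channel (global average)
cam = np.maximum((alpha[:, None] * feature_maps).sum(axis=0), 0.0)  # ReLU of weighted sum

# Normalise to [0, 1] so the map can be overlaid on the EEG epoch as a heat map
if cam.max() > 0:
    cam = cam / cam.max()
print(cam.shape)                         # (100,)
```

For an image network the only change is averaging the gradients over both spatial axes and producing a 2-D map.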
We aim to present two algorithms for the non-stationary Stokes/Darcy model. The first is the standard grad–div stabilization scheme. The second is a modular grad–div scheme built on the standard Backward Euler code, which does not crash or slow down for large values of the grad–div parameter. Both algorithms can not only improve the efficiency and accuracy of the computation but also improve mass conservation, with the modular algorithm performing better. We give a complete theoretical analysis of the stability and error estimates of the algorithms. Finally, the theoretical results are verified by numerical experiments, and the advantages of adding grad–div stabilization terms are demonstrated.
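For context, grad–div stabilization augments the discrete momentum equation with a penalty on the velocity divergence. A generic sketch of the stabilized weak form in the fluid region (stabilization parameter $\gamma \ge 0$; the Darcy coupling terms of this model are omitted here for brevity):

```latex
% Standard grad-div stabilization: the gamma-weighted divergence penalty
% is added to the Backward Euler discretization of the momentum equation.
(\partial_t u_h, v_h) + \nu\,(\nabla u_h, \nabla v_h) - (p_h, \nabla\cdot v_h)
  + \gamma\,(\nabla\cdot u_h, \nabla\cdot v_h)
  = (f, v_h) \qquad \forall\, v_h \in V_h .
```

As the abstract notes, the modular variant decouples this penalty from the main Backward Euler solve into a separate step, which is why the solver does not crash or slow down as $\gamma$ grows large.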
•A transfer-learning-based depth reduction approach for CNN models (TLDR-CNN) is proposed, aiming to improve classification performance while significantly reducing the number of layers in the CNN model.
•A multi-scale feature extraction module (MSFE module) consisting of five parallel branches is proposed, aiming to extract spectral features of different scales and improve the model's generalization ability.
•Grad-CAM (Gradient-weighted Class Activation Mapping) is applied in the CNN model for fault classification.
•The dataset utilizes offline augmentation to overcome the imbalanced distribution of faults and enhance the generalization capability of the CNN model.
In the operation of photovoltaic (PV) power plants, infrared cameras are commonly utilized for monitoring the operational status of PV modules. This study focuses on improving the performance and reducing the complexity of convolutional neural networks (CNNs) used for fault classification based on infrared images of PV modules. By applying a transfer learning strategy to several well-known CNN models, it is observed that the number of convolutional layers has a weak impact on the classification results. Therefore, a transfer-learning-based depth reduction approach for CNN models (TLDR-CNN approach) is proposed, and the VGG16 model is employed for verification. Then, a multi-scale feature extraction module (MSFE module) is developed to efficiently replace convolutional layers, reducing model complexity and improving classification performance, and several representative model configurations are employed for convolutional layer replacement. Experimental results demonstrate that the developed MSFE module significantly outperforms the baseline model in both classification performance and model complexity. Specifically, the modified model with a reduction of 5 convolutional layers exhibits notable improvements, with an accuracy increase of 0.90%, a precision increase of 0.98%, an F1 score increase of 6.89%, and a Matthews correlation coefficient increase of 1.01%. Finally, the interpretability of this outperformance is provided using the Grad-CAM method. The generated CAM images show that the modified model concentrates its weights more on the regions crucial for learning, so features can be extracted more efficiently.
The coronavirus has caused havoc for billions of people worldwide. The Reverse Transcription Polymerase Chain Reaction (RT-PCR) test is widely accepted as a standard diagnostic tool for detecting infection; however, the severity of infection cannot be measured accurately from RT-PCR results. Chest CT scans of infected patients can manifest the presence of lesions with high sensitivity. During the pandemic, there is a dearth of competent doctors to examine chest CT images. Therefore, a Guided GradCam based Explainable Classification and Segmentation system (GGECS), a real-time explainable classification and lesion identification decision support system, is proposed in this work. The classification model used in the proposed GGECS system is inspired by Res2Net. Explainable AI techniques such as GradCam and Guided GradCam are used to demystify convolutional neural networks (CNNs). These explainable systems can assist in localizing the regions in the CT scan that contribute significantly to the system's prediction. The segmentation model can further reliably localize infected regions; it is a fusion of VGG-16 and the classification network. The proposed classification model in GGECS obtains an overall accuracy of 98.51% and the segmentation model achieves an IoU score of 0.595.
Purpose
The worldwide spread of the SARS-CoV-2 virus poses unprecedented challenges to medical resources and to infection prevention and control measures around the world. In this context, a rapid and effective detection method for COVID-19 can not only relieve the pressure on the medical system but also help find and isolate patients in time, slowing the development of the epidemic to a certain extent. In this paper, we propose a method that can quickly and accurately diagnose whether pneumonia is viral pneumonia, and classify viral pneumonia in a fine-grained way to diagnose COVID-19.
Methods
We proposed a Cascade Squeeze-Excitation and Moment Exchange (Cascade-SEME) framework that can effectively detect COVID-19 cases by evaluating chest x-ray images, where SE is an attention structure we designed in the network and ME is a method for image enhancement in the feature dimension. The framework integrates a model for coarse-level detection of virus cases among other forms of lung infection, and a model for fine-grained categorisation of pneumonia types that identifies COVID-19 cases. In addition, a Regional Learning approach is proposed to mitigate the impact of non-lesion features on network training. The network output is also visualised, highlighting the likely areas of lesion, to assist experts' assessment and diagnosis of COVID-19.
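The squeeze-excitation (SE) attention referenced above reweights channels by a squeeze (global average pooling) followed by a small excitation network ending in a sigmoid. A minimal NumPy sketch of that recalibration step (layer sizes and random weights are illustrative, not the paper's design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
C, H, W = 16, 8, 8
x = rng.normal(size=(C, H, W))             # feature maps of one sample

# Squeeze: one descriptor per channel via global average pooling
z = x.mean(axis=(1, 2))                     # shape (C,)

# Excitation: bottleneck FC -> ReLU -> FC -> sigmoid (reduction ratio r = 4)
r = 4
W1 = rng.normal(scale=0.1, size=(C // r, C))
W2 = rng.normal(scale=0.1, size=(C, C // r))
s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))   # per-channel attention weights in (0, 1)

# Scale: reweight each channel's feature map by its attention weight
y = x * s[:, None, None]
print(y.shape)                              # (16, 8, 8)
```

In a trained network, W1 and W2 are learned, so informative channels receive weights near 1 and uninformative ones are suppressed.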
Results
Three datasets were used: a set of chest x-ray images for classification with bacterial pneumonia, viral pneumonia and normal chest x-rays; a COVID chest x-ray dataset with COVID-19; and a Lung Segmentation dataset containing 1000 chest x-rays with masks in the lung region. We evaluated all the models on the test set. The results show that the proposed SEME structure significantly improves the performance of the models. In the task of pneumonia infection type diagnosis, the sensitivity, specificity, accuracy and F1 score of ResNet50 with the SEME structure are significantly improved in each category, and the accuracy and AUC on the whole test set are also enhanced. In the COVID-19 detection task, the evaluation results show that when the SEME structure was added, the sensitivities, specificities, accuracy and F1 scores of ResNet50 and DenseNet169 are improved. Although the sensitivities and specificities are not significantly improved, SEME balances these two important indicators well. Regional Learning also plays an important role: experiments show that it can effectively correct the impact of non-lesion features on the network, as can be seen with the Grad-CAM method.
Conclusions
Experiments show that after the application of the SEME structure in the network, the performance of SEME-ResNet50 and SEME-DenseNet169 on both datasets shows a clear enhancement. The proposed Regional Learning method effectively directs the network's attention to relevant pathological regions in the lung radiograph, ensuring the performance of the proposed framework even when a small training set is used. The visual interpretation step using Grad-CAM finds that the regions of attention on radiographs of different types of pneumonia are located in different regions of the lungs.
In this work, we propose a parallel grad-div stabilized finite element algorithm for the Navier–Stokes equations with a nonlinear damping term, using a fully overlapping domain decomposition approach. In the proposed algorithm, we calculate a local solution in a defined subdomain on a global composite mesh that is fine around the subdomain and coarse in other regions. The algorithm is simple to carry out on the basis of available sequential solvers. By a local a priori estimate of the finite element solution, we deduce error bounds for the approximations from our algorithm. We also perform some numerical experiments to verify the effectiveness of the proposed algorithm.
A parallel grad‐div stabilized finite element algorithm based on fully overlapping domain decomposition is proposed for the Navier–Stokes equations with damping. The algorithm calculates a local solution in a subdomain on a global composite mesh that is locally refined around the subdomain, making it simple to carry out on the basis of available sequential solvers. Effectiveness of the algorithm is verified by theoretical analysis and numerical experiments.