Inclusion of a term −γ∇∇⋅u, forcing ∇⋅u to be pointwise small, is an effective tool for improving mass conservation in discretizations of incompressible flows. However, the added grad-div term couples all velocity components, decreases sparsity, and increases the condition number of the linear systems that must be solved every time step. To address these three issues, various sparse grad-div regularizations and a modular grad-div method have been developed. We develop and analyze herein a synthesis of the fully decoupled, parallel sparse grad-div method of Guermond and Minev with the modular grad-div method. Let G* = −diag(∂²/∂x², ∂²/∂y², ∂²/∂z²) denote the diagonal of G = −∇∇⋅, and let α ≥ 0 be an adjustable parameter. The 2-step method considered is: Step 1: (ũ^{n+1} − u^n)/k + u^n⋅∇ũ^{n+1} + ∇p^{n+1} − νΔũ^{n+1} = f and ∇⋅ũ^{n+1} = 0; Step 2: [(1/k)I + (γ+α)G*] u^{n+1} = (1/k)ũ^{n+1} + [(γ+α)G* − γG] u^n. We prove its unconditional, nonlinear, long-time stability in 3d for α ≥ 0.5γ. The analysis also establishes that the method controls the persistent size of ‖∇⋅u‖ in general, and controls the transients in ‖∇⋅u‖ when u(x,0) = 0 and f(x,t) ≠ 0, provided α > 0.5γ. Consistent numerical tests are presented.
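The decoupling in Step 2 can be illustrated with a small sketch: because G* is diagonal, each velocity component satisfies its own scalar linear system. The grid, parameters, and right-hand side below are illustrative assumptions, not the paper's test problem.

```python
import numpy as np

# Sketch of the step-2 update for one velocity component on a 1-D
# periodic grid. Because G* is diagonal, the implicit operator
# (1/k)I + (γ+α)G* acts on each component independently; the full
# coupling operator G appears only explicitly on u^n.
n, h = 64, 1.0 / 64                 # grid points and spacing (illustrative)
k, gamma, alpha = 0.01, 1.0, 0.5    # time step, γ, and α = 0.5γ

# Discrete -d²/ds² with periodic boundaries (one diagonal block of G*)
D2 = (2.0 * np.eye(n)
      - np.roll(np.eye(n), 1, axis=1)
      - np.roll(np.eye(n), -1, axis=1)) / h**2

A = np.eye(n) / k + (gamma + alpha) * D2   # implicit operator, per component

# Step 2 solve for one component (toy data; the -γG u^n coupling term
# vanishes here because we take u^0 = 0).
u_tilde = np.sin(2 * np.pi * np.arange(n) * h)   # intermediate velocity
u_old = np.zeros(n)
rhs = u_tilde / k + (gamma + alpha) * D2 @ u_old
u_new = np.linalg.solve(A, rhs)
```

Each component's system is symmetric positive definite and tridiagonal (up to the periodic wrap), so the solves are cheap and can run in parallel across components.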
Summary Background National levels of personal health-care access and quality can be approximated by measuring mortality rates from causes that should not be fatal in the presence of effective medical care (ie, amenable mortality). Previous analyses of mortality amenable to health care focused only on high-income countries and faced several methodological challenges. In the present analysis, we use the highly standardised cause of death and risk factor estimates generated through the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) to improve and expand the quantification of personal health-care access and quality for 195 countries and territories from 1990 to 2015. Methods We mapped the most widely used list of causes amenable to personal health care, developed by Nolte and McKee, to 32 GBD causes. We accounted for variations in cause of death certification and misclassifications through the extensive data standardisation processes and redistribution algorithms developed for GBD. To isolate the effects of personal health-care access and quality, we risk-standardised cause-specific mortality rates for each geography-year by removing the joint effects of local environmental and behavioural risks, and adding back the global levels of risk exposure as estimated for GBD 2015. We employed principal component analysis to create a single, interpretable summary measure, the Healthcare Access and Quality (HAQ) Index, on a scale of 0 to 100. The HAQ Index showed strong convergence validity as compared with other health-system indicators, including health expenditure per capita (r=0·88), an index of 11 universal health coverage interventions (r=0·83), and human resources for health per 1000 (r=0·77).
We used free disposal hull analysis with bootstrapping to produce a frontier based on the relationship between the HAQ Index and the Socio-demographic Index (SDI), a measure of overall development consisting of income per capita, average years of education, and total fertility rates. This frontier allowed us to better quantify the maximum levels of personal health-care access and quality achieved across the development spectrum, and pinpoint geographies where gaps between observed and potential levels have narrowed or widened over time. Findings Between 1990 and 2015, nearly all countries and territories saw their HAQ Index values improve; nonetheless, the difference between the highest and lowest observed HAQ Index was larger in 2015 than in 1990, ranging from 28·6 to 94·6. Of 195 geographies, 167 had statistically significant increases in HAQ Index levels since 1990, with South Korea, Turkey, Peru, China, and the Maldives recording among the largest gains by 2015. Performance on the HAQ Index and individual causes showed distinct patterns by region and level of development, yet substantial heterogeneities emerged for several causes, including cancers in highest-SDI countries; chronic kidney disease, diabetes, diarrhoeal diseases, and lower respiratory infections among middle-SDI countries; and measles and tetanus among lowest-SDI countries. While the global HAQ Index average rose from 40·7 (95% uncertainty interval, 39·0–42·8) in 1990 to 53·7 (52·2–55·4) in 2015, far less progress occurred in narrowing the gap between observed HAQ Index values and maximum levels achieved; at the global level, the difference between the observed and frontier HAQ Index only decreased from 21·2 in 1990 to 20·1 in 2015. If every country and territory had achieved the highest observed HAQ Index by their corresponding level of SDI, the global average would have been 73·8 in 2015. 
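The free disposal hull step described above can be sketched as follows. The country data are invented for illustration, and the bootstrap/bias-correction details of the published analysis are omitted.

```python
# Minimal free-disposal-hull (FDH) frontier sketch: for each geography,
# the frontier HAQ value at its SDI is the maximum HAQ observed among
# all geographies with SDI at or below that level. Data are synthetic
# stand-ins; the published analysis additionally bootstraps the
# frontier to account for sampling variability.

def fdh_frontier(points):
    """points: list of (sdi, haq) pairs. Returns {(sdi, haq): frontier_haq}."""
    return {(s, h): max(h2 for s2, h2 in points if s2 <= s)
            for s, h in points}

data = [(0.3, 35.0), (0.5, 60.0), (0.5, 48.0), (0.8, 90.0), (0.9, 85.0)]
frontier = fdh_frontier(data)

# Gap between observed and frontier (potential) HAQ at each geography
gaps = {p: frontier[p] - p[1] for p in data}
```

The gap values are what the text calls the difference between observed and frontier HAQ Index levels at a given stage of development.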
Several countries, particularly in eastern and western sub-Saharan Africa, reached HAQ Index values similar to or beyond their development levels, whereas others, namely in southern sub-Saharan Africa, the Middle East, and south Asia, lagged behind what geographies of similar development attained between 1990 and 2015. Interpretation This novel extension of the GBD Study shows the untapped potential for personal health-care access and quality improvement across the development spectrum. Amid substantive advances in personal health care at the national level, heterogeneous patterns for individual causes in given countries or territories suggest that few places have consistently achieved optimal health-care access and quality across health-system functions and therapeutic areas. This is especially evident in middle-SDI countries, many of which have recently undergone or are currently experiencing epidemiological transitions. The HAQ Index, if paired with other measures of health-system characteristics such as intervention coverage, could provide a robust avenue for tracking progress on universal health coverage and identifying local priorities for strengthening personal health-care quality and access throughout the world. Funding Bill & Melinda Gates Foundation.
The world is suffering from an existential global health crisis known as the COVID-19 pandemic. Countries such as India, Bangladesh, and other developing countries still have a slow pace in the detection of COVID-19 cases. There is therefore an urgent need for fast detection, with clear visualization of infection, by which a suspected COVID-19 patient could be saved. With recent technological advances, the fusion of deep learning classifiers and medical images provides more promising results than traditional RT-PCR testing, enabling detection and prediction of COVID-19 cases with increased accuracy. In this paper, we propose a deep transfer learning algorithm that accelerates the detection of COVID-19 cases by using X-ray and CT-scan images of the chest, since in COVID-19 an initial screening of chest X-rays (CXR) may provide significant information for detecting suspected cases. We consider three datasets: 1) COVID-chest X-ray, 2) SARS-COV-2 CT-scan, and 3) Chest X-Ray Images (Pneumonia). In the obtained results, the proposed deep learning model can detect COVID-19-positive cases in ≤ 2 seconds, which is faster than the RT-PCR tests currently used for detection of COVID-19 cases. We also establish a relationship between COVID-19 patients and pneumonia patients, which explores the pattern between pneumonia and COVID-19 radiology images. In all experiments, we use the Grad-CAM-based color visualization approach to clearly interpret the detections on radiology images and to guide the further course of action.
We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say ‘dog’ in a classification network or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are robust to adversarial perturbations, (d) are more faithful to the underlying model, and (e) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show that even non-attention-based models learn to localize discriminative regions of the input image. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names (Bau et al. in Computer vision and pattern recognition, 2017) to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure whether Grad-CAM explanations help users establish appropriate trust in predictions from deep networks, and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo on CloudCV (Agrawal et al., in: Mobile cloud visual media computing, pp 265–290. Springer, 2015) (http://gradcam.cloudcv.org) and a video at http://youtu.be/COjUB9Izk6E.
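The Grad-CAM computation described above reduces to a few array operations. A minimal NumPy sketch follows, with random tensors standing in for a real network's final-convolutional-layer activations and backpropagated gradients.

```python
import numpy as np

# Minimal Grad-CAM sketch with synthetic tensors. In a real model,
# `activations` are the final conv layer's K feature maps A^k and
# `gradients` are dY^c/dA^k for the target class score Y^c, obtained
# by backpropagation; here both are random stand-ins.
rng = np.random.default_rng(0)
activations = rng.standard_normal((8, 7, 7))   # K feature maps, H x W each
gradients = rng.standard_normal((8, 7, 7))     # gradients of class score

# 1. Neuron-importance weights: global-average-pool the gradients.
weights = gradients.mean(axis=(1, 2))          # shape (K,)

# 2. Weighted combination of feature maps, then ReLU: only features
#    with a positive influence on the class survive.
cam = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)

# 3. Normalize to [0, 1] for overlay on the input image (in practice
#    the map is then upsampled to the input resolution).
cam = cam / cam.max() if cam.max() > 0 else cam
```

The coarse H x W map is all Grad-CAM itself produces; Guided Grad-CAM obtains its high resolution by elementwise multiplication with a guided-backpropagation saliency map.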
To develop a deep learning radiomics graph network (DLRN) that integrates deep learning features extracted from gray-scale ultrasonography, radiomics features, and clinical features, for distinguishing parotid pleomorphic adenoma (PA) from adenolymphoma (AL).
A total of 287 patients (162 in the training cohort, 70 in the internal validation cohort, and 55 in the external validation cohort) from two centers with histologically confirmed PA or AL were enrolled. Deep transfer learning features and radiomics features extracted from gray-scale ultrasound images were input to machine learning classifiers, including logistic regression (LR), support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), ExtraTrees, XGBoost, LightGBM, and MLP, to construct deep transfer learning (DTL) models and Rad models, respectively. Deep learning radiomics (DLR) models were constructed by integrating the two feature sets, and DLR signatures were generated. Clinical features were further combined with the signatures to develop a DLRN model. The performance of these models was evaluated using receiver operating characteristic (ROC) curve analysis, calibration, decision curve analysis (DCA), and the Hosmer-Lemeshow test.
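The fusion step, concatenating deep transfer learning features with radiomics features before a classifier, can be sketched as below. The feature arrays are random stand-ins, and a plain NumPy logistic regression replaces the study's full classifier comparison.

```python
import numpy as np

# Feature-fusion sketch: deep transfer learning (DTL) features and
# radiomics features for the same patients are concatenated and fed to
# a logistic-regression classifier trained by gradient descent. All
# data are synthetic stand-ins for the study's ultrasound features.
rng = np.random.default_rng(1)
n = 200
dtl = rng.standard_normal((n, 32))       # e.g. pooled CNN features
rad = rng.standard_normal((n, 16))       # e.g. texture/shape features
X = np.hstack([dtl, rad])                # DLR-style fused feature matrix

# Synthetic labels from a hidden linear rule plus noise
true_w = rng.standard_normal(X.shape[1])
y = (X @ true_w + 0.1 * rng.standard_normal(n) > 0).astype(float)

w = np.zeros(X.shape[1])
for _ in range(500):                     # simple full-batch training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / n         # gradient step on the log-loss

acc = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
```

The same fused matrix X would be passed to LR, SVM, tree ensembles, and MLP in the study's actual comparison.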
In the internal and external validation cohorts, compared with the Clinic (AUC=0.767 and 0.777), Rad (AUC=0.841 and 0.748), DTL (AUC=0.740 and 0.825), and DLR (AUC=0.863 and 0.859) models, the DLRN model showed the greatest discriminatory ability (AUC=0.908 and 0.908).
The DLRN model built on gray-scale ultrasonography significantly improved diagnostic performance for benign salivary gland tumors. It can provide clinicians with a non-invasive and accurate diagnostic approach, which holds important clinical significance and value. Ensembling multiple models helped alleviate overfitting on the small dataset compared with using ResNet50 alone.
RNA–protein interactions are the crucial basis for many steps of bacterial gene expression, including post‐transcriptional control by small regulatory RNAs (sRNAs). In stark contrast to recent ...progress in the analysis of Gram‐negative bacteria, knowledge about RNA–protein complexes in Gram‐positive species remains scarce. Here, we used the Grad‐seq approach to draft a comprehensive landscape of such complexes in Streptococcus pneumoniae, in total determining the sedimentation profiles of ~ 88% of the transcripts and ~ 62% of the proteins of this important human pathogen. Analysis of in‐gradient distributions and subsequent tag‐based protein capture identified interactions of the exoribonuclease Cbf1/YhaM with sRNAs that control bacterial competence for DNA uptake. Unexpectedly, the nucleolytic activity of Cbf1 stabilizes these sRNAs, thereby promoting their function as repressors of competence. Overall, these results provide the first RNA/protein complexome resource of a Gram‐positive species and illustrate how this can be utilized to identify new molecular factors with functions in RNA‐based regulation of virulence‐relevant pathways.
Synopsis
Application of Grad‐seq analysis to Streptococcus pneumoniae provides the first cellular RNA/protein complexome resource for a Gram‐positive bacterium, and uncovers a specific exonuclease as a new player in the competence regulon for DNA uptake and pneumococcal virulence.
Grad‐seq analyses predict complexes for thousands of Streptococcus pneumoniae RNAs and proteins.
Grad‐seq‐predicted complexes also aid functional characterization in other Gram‐positive bacteria.
Cbf1 is identified as an exonuclease that trims and stabilizes small RNAs in the competence regulon.
Cbf1 acts as a negative regulator of competence in S. pneumoniae.
Comprehensive assessment of protein‐RNA complexes in Streptococcus pneumoniae via Grad‐seq uncovers an unexpected role for the exoribonuclease Cbf1 in stabilizing sRNAs that control bacterial competence for DNA uptake.
We report on the detailed and systematic study of field line twist and length distributions within magnetic flux ropes embedded in interplanetary coronal mass ejections (ICMEs). The Grad‐Shafranov reconstruction method is utilized together with a constant‐twist nonlinear force‐free (Gold‐Hoyle) flux rope model to reveal the close relation between the field line twist and length in cylindrical flux ropes, based on in situ Wind spacecraft measurements. We show that the field line twist distributions within interplanetary flux ropes are inconsistent with the Lundquist model. In particular, we utilize the unique measurements of magnetic field line lengths within selected ICME events as provided by Kahler et al. () based on energetic electron burst observations at 1 AU and the associated type III radio emissions detected by the Wind spacecraft. These direct measurements are compared with our model calculations to help assess the flux rope interpretation of the embedded magnetic structures. By using the different flux rope models, we show that the in situ direct measurements of field line lengths are consistent with a flux rope structure with spiral field lines of constant and low twist, largely different from that of the Lundquist model, especially for relatively large‐scale flux ropes.
Key Points
Field line twist of MCs remains fairly constant
Field line lengths inside MCs are consistent with flux rope interpretations
Axial lengths of cylindrical flux ropes are between 1 and 2 AU
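For reference, the constant-twist (Gold-Hoyle) flux rope field underlying the analysis above can be written as follows; the notation (B_0 for the axial field strength, τ for the twist per unit length, r for the radial coordinate) is an assumed convention, not copied from the paper.

```latex
% Gold--Hoyle (constant-twist) force-free field in cylindrical coordinates:
B_z(r) = \frac{B_0}{1+\tau^2 r^2}, \qquad
B_\varphi(r) = \frac{B_0\,\tau r}{1+\tau^2 r^2}, \qquad
B_r = 0 .
% Every field line winds about the axis at the same rate,
\frac{d\varphi}{dz} = \frac{B_\varphi}{r\,B_z} = \tau ,
% so a field line at radius r traversing axial length L_z has total length
L(r) = L_z \sqrt{1+\tau^2 r^2} .
```

The last relation is what ties the directly measured field line lengths to a constant, low twist: for small τr, L(r) stays close to the axial length at all radii, unlike in the Lundquist model, where the twist diverges toward the flux rope boundary.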
In the detection of COVID-19, chest X-ray (CXR) images and CT scan images are the two main technical methods, which provide an important basis for doctors' diagnoses. Currently, convolutional neural networks (CNNs) for detecting COVID-19 in medical radiological images have problems of low accuracy, complex algorithms, and inability to mark feature regions. To solve these problems, this paper proposes an algorithm combining Grad-CAM color visualization and a convolutional neural network (GCCV-CNN). The algorithm can quickly classify lung X-ray images and CT scan images of COVID-19-positive patients, COVID-19-negative patients, general pneumonia patients, and healthy people. At the same time, it can quickly locate the critical areas in X-ray and CT images. Finally, the algorithm can achieve more accurate detection results through the synthesis of deep learning algorithms. To verify the effectiveness of the GCCV-CNN algorithm, experiments are performed on three COVID-19-positive patient datasets.
•Classification framework to distinguish between COVID-19, normal and pneumonia CXR images
•Handpicked features in conjunction with those obtained via transfer learning using ResNet50
•10-fold cross-validation accuracy of 0.974 ± 0.02 at 95% confidence interval
•Gradient-based localizations captured using Grad-CAM serve as clinical evidence
•Efficacy of the proposed work, ascertained by validation on an independent cohort
Coronaviruses are a family of viruses that majorly cause respiratory disorders in humans. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a new strain of coronavirus that causes the coronavirus disease 2019 (COVID-19). WHO has identified COVID-19 as a pandemic as it has spread across the globe due to its highly contagious nature. For early diagnosis of COVID-19, the reverse transcription-polymerase chain reaction (RT-PCR) test is commonly done. However, it suffers from a high false-negative rate of up to 67% if the test is done during the first five days of exposure. As an alternative, research on the efficacy of deep learning techniques employed in the identification of COVID-19 disease using chest X-ray images is intensely pursued.
As pneumonia and COVID-19 exhibit similar, overlapping symptoms and affect the human lungs, distinguishing between the chest X-ray images of pneumonia patients and COVID-19 patients becomes challenging. In this work, we have modeled the COVID-19 classification problem as a multiclass classification problem involving three classes, namely COVID-19, pneumonia, and normal. We have proposed a novel classification framework which combines a set of handpicked features with those obtained from a deep convolutional neural network. The proposed framework comprises three modules. In the first module, we exploit the strength of transfer learning using ResNet-50 for training the network on a set of preprocessed images and obtain a vector of 2048 features. In the second module, we construct a pool of 252 frequency- and texture-based handpicked features that are reduced to a set of 64 features using PCA. Subsequently, these are passed to a feed-forward neural network to obtain a set of 16 features. The third module concatenates the features obtained from the first and second modules, and passes them to a dense layer followed by a softmax layer to yield the desired classification model. We have used chest X-ray images of COVID-19 patients from four independent publicly available repositories, in addition to images from the Mendeley and Kaggle Chest X-Ray Datasets for pneumonia and normal cases.
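The second module's dimensionality reduction can be sketched as below; the feature matrix is a random stand-in for the 252 handpicked features, and PCA is implemented directly with an SVD rather than the authors' (unspecified) library.

```python
import numpy as np

# Sketch of the second module's reduction: 252 handpicked frequency and
# texture features per image are reduced to 64 principal components via
# PCA, implemented here as an SVD of the centered feature matrix. The
# data are synthetic stand-ins for the real handpicked features.
rng = np.random.default_rng(2)
n_images, n_feats, n_keep = 300, 252, 64
F = rng.standard_normal((n_images, n_feats))

Fc = F - F.mean(axis=0)                       # center each feature
U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
F64 = Fc @ Vt[:n_keep].T                      # project onto top 64 components

# Fraction of total variance retained by the kept components
evr = (S[:n_keep] ** 2).sum() / (S ** 2).sum()
```

The resulting 64-dimensional vectors would then feed the small feed-forward network that compresses them to 16 features before concatenation with the 2048 ResNet-50 features.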
To establish the efficacy of the proposed model, 10-fold cross-validation is carried out. The model generated an overall classification accuracy of 0.974 ± 0.02 and a sensitivity of 0.987 ± 0.05, 0.963 ± 0.05, and 0.973 ± 0.04 at 95% confidence interval for COVID-19, normal, and pneumonia classes, respectively. To ensure the effectiveness of the proposed model, it was validated using an independent Chest X-ray cohort and an overall classification accuracy of 0.979 was achieved. Comparison of the proposed framework with state-of-the-art methods reveal that the proposed framework outperforms others in terms of accuracy and sensitivity. Since interpretability of results is crucial in the medical domain, the gradient-based localizations are captured using Gradient-weighted Class Activation Mapping (Grad-CAM). In summary, the results obtained are stable over independent cohorts and interpretable using Grad-CAM localizations that serve as clinical evidence.
This study presents an investigation of optical emission spectroscopy of plasma using an interpretable convolutional neural network (CNN) for real-time volatile organic compound (VOC) classification. A microplasma-generation platform was developed to efficiently collect 64 k spectra from various types of VOCs at different concentrations, as training and testing sets for machine learning. A CNN model was trained to classify VOCs with an accuracy of 99.9%. To interpret the CNN model and its predictions, the spectral processing mechanism of the CNN was visualized by feature maps, and the critical spectral features were identified by gradient-weighted class activation mapping. Such approaches brought insights into how the CNN analyzes the spectra and enable the CNN operation to be explainable. Finally, the CNN model was incorporated with the microplasma platform to demonstrate real-time VOC monitoring. The type of VOC can be identified and reported via messages within 10 s once the microplasma is ignited. We believe that using a CNN brings a novel route to plasma spectroscopy analysis for VOC classification and impacts the fields of plasma, spectroscopy, and environmental monitoring.
•CNN was used to analyze plasma spectroscopy for VOC identification.
•The CNN model was used to classify the VOCs with accuracy >99.8%.
•Grad-CAM was used to interpret the CNN predictions.
•Real-time and online monitoring of VOCs was performed with instant warning messages.