Globally, cervical cancer remains one of the most prevalent cancers in women. Hence, it is necessary to identify the important risk factors of cervical cancer in order to screen potential patients. The present work proposes a cervical cancer prediction model (CCPM) that offers early prediction of cervical cancer using risk factors as inputs. The CCPM first removes outliers using outlier detection methods, namely density-based spatial clustering of applications with noise (DBSCAN) and isolation forest (iForest), and then increases the number of cases in the dataset in a balanced way through the synthetic minority over-sampling technique (SMOTE) or SMOTE with Tomek links (SMOTETomek). Finally, it employs random forest (RF) as the classifier. Thus, CCPM comprises four scenarios: (1) DBSCAN + SMOTETomek + RF, (2) DBSCAN + SMOTE + RF, (3) iForest + SMOTETomek + RF, and (4) iForest + SMOTE + RF. A dataset of 858 potential patients was used to validate the performance of the proposed method. We found that the combinations of iForest with SMOTE and iForest with SMOTETomek performed better than those of DBSCAN with SMOTE and DBSCAN with SMOTETomek. We also observed that RF performed best among several popular machine learning classifiers. Furthermore, the proposed CCPM showed better accuracy than previously proposed methods for forecasting cervical cancer. In addition, a mobile application that collects cervical cancer risk-factor data and provides results from CCPM is developed for instant and proper action at the initial stage of cervical cancer.
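The pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn is available; a simplified nearest-neighbour interpolation stands in for the full SMOTE implementation (normally taken from the separate imbalanced-learn package), and the risk-factor data are randomly generated, not the 858-patient dataset.

```python
# Sketch of one CCPM scenario: iForest outlier removal + simplified SMOTE + RF.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical risk-factor data: 500 negatives, 50 positives, 8 features.
X = np.vstack([rng.normal(0, 1, (500, 8)), rng.normal(1.5, 1, (50, 8))])
y = np.array([0] * 500 + [1] * 50)

# Step 1: remove outliers with isolation forest (iForest); fit_predict
# returns +1 for inliers and -1 for outliers.
mask = IsolationForest(random_state=0).fit_predict(X) == 1
X, y = X[mask], y[mask]

# Step 2: simplified SMOTE -- synthesize minority samples by linear
# interpolation between a minority point and its nearest minority neighbour.
def smote(X_min, n_new, rng):
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        j = np.argsort(d)[1]              # nearest neighbour (not itself)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = X[y == 1]
n_new = int((y == 0).sum() - (y == 1).sum())
X_bal = np.vstack([X, smote(X_min, n_new, rng)])
y_bal = np.concatenate([y, np.ones(n_new, dtype=int)])

# Step 3: random forest as the final classifier.
Xtr, Xte, ytr, yte = train_test_split(X_bal, y_bal, stratify=y_bal,
                                      random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

Swapping `IsolationForest` for DBSCAN-based outlier labelling yields the other two scenarios.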
•A Pixel Increase along Limit (PIaL) based enhancement method is proposed.
•Tumor segmentation is performed through saliency-based deep learning.
•PSO optimization and entropy padding based active features selection.
•A features-based extensive comparison is conducted.
Glioma is a kind of brain tumor that can arise at distinct locations with dissimilar appearance and size. High-grade glioma (HGG) is a serious kind of cancer compared to low-grade glioma (LGG). The manual diagnosis process for these tumors is tiring and time-consuming. Therefore, in clinical practice, MRI is useful to assess gliomas as it provides essential information about tumor regions. In this manuscript, an active deep learning-based feature selection approach is suggested to segment and recognize brain tumors. Contrast enhancement is performed in the primary step, and the enhanced image is supplied to a saliency-based deep learning (SbDL) method for saliency map construction, which is later converted into binary form by applying simple thresholding. In the classification phase, the Inception V3 pre-trained CNN model is employed for deep feature extraction. These features are concatenated with dominant rotated LBP (DRLBP) features for better texture analysis. Later, the concatenated vector is optimized through particle swarm optimization (PSO) and classified using a softmax classifier. The experiments are conducted in two phases. First, the segmentation approach SbDL is validated on the BRATS2017 and BRATS2018 datasets. The achieved dice scores for the BRATS2017 dataset are 83.73% for the core tumor, 93.7% for the whole tumor, and 79.94% for the enhanced tumor. For the BRATS2018 dataset, the dice scores obtained are 88.34% (core), 91.2% (whole), and 81.84% (enhanced). Second, the classification strategy is applied on BRATS2013, 2014, 2017, and 2018 with an average accuracy of more than 92%. The overall results show that the presented method performs strongly for both segmentation and classification of brain tumors.
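PSO-driven feature selection of the kind used here can be illustrated with a toy binary particle swarm over feature masks (numpy only, synthetic data). The fitness below is a simple class-separation score minus a subset-size penalty, standing in for the paper's softmax classification accuracy; all data and constants are placeholders.

```python
# Toy binary PSO for feature selection over a synthetic "fused" feature matrix.
import numpy as np

rng = np.random.default_rng(1)

n, d = 200, 20
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d))
X[:, :4] += y[:, None] * 2.0            # 4 informative columns shift with class

def fitness(mask):
    """Class separation of the selected columns, penalised by subset size."""
    if mask.sum() == 0:
        return -np.inf
    cols = mask.astype(bool)
    sep = np.abs(X[y == 0][:, cols].mean(0) - X[y == 1][:, cols].mean(0)).sum()
    return sep - 0.5 * mask.sum()

def sample(p):
    """Sample a binary mask from sigmoid-squashed particle positions."""
    return (1.0 / (1.0 + np.exp(-p)) > rng.random(p.shape)).astype(int)

P, iters = 20, 60
pos = rng.normal(0, 1, (P, d))
vel = np.zeros_like(pos)
pbest_pos, pbest_fit = pos.copy(), np.full(P, -np.inf)
gbest_pos, gbest_mask, gbest_fit = pos[0].copy(), sample(pos[0]), -np.inf

for _ in range(iters):
    for i in range(P):
        m = sample(pos[i])
        f = fitness(m)
        if f > pbest_fit[i]:
            pbest_fit[i], pbest_pos[i] = f, pos[i].copy()
        if f > gbest_fit:
            gbest_fit, gbest_pos, gbest_mask = f, pos[i].copy(), m
    # Standard PSO velocity update with inertia and two attraction terms.
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest_pos - pos) + 1.5 * r2 * (gbest_pos - pos)
    pos = pos + vel
```

`gbest_mask` is the best feature subset found; in the paper's pipeline the surviving features would then go to the softmax classifier.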
•Decorrelation formulation based contrast improvement.
•Lesion segmentation using modified MASK RCNN.
•Transfer learning based CNN features are extracted.
•Entropy-controlled LS-SVM based best CNN features are selected.
Malignant melanoma is considered one of the deadliest types of skin cancer, responsible for a massive number of deaths worldwide. According to the American Cancer Society (ACS), more than a million Americans are living with melanoma. In 2019, 192,310 new cases of melanoma were registered, of which 95,380 were noninvasive and 96,480 invasive. The number of deaths due to melanoma in 2019 alone was 7,230, comprising 4,740 men and 2,490 women. Melanoma may be curable if diagnosed at an earlier stage; however, manual diagnosis is time-consuming and dependent on expert dermatologists. In this work, a fully automated computer-aided diagnosis (CAD) system is proposed based on a deep learning framework. In the proposed scheme, the original dermoscopic images are initially pre-processed using the decorrelation formulation technique, and the resultant images are then passed to MASK-RCNN for lesion segmentation. In this step, the MASK-RCNN model is trained using the segmented RGB images generated from the ground-truth images of the ISBI2016 and ISIC2017 datasets. The resultant segmented images are later passed to the DenseNet deep model for feature extraction. Two different layers, average pool and fully connected, are used for feature extraction; their outputs are combined, and the resultant vector is forwarded to the feature selection block for down-sampling using the proposed entropy-controlled least squares SVM (LS-SVM). Three datasets are utilized for validation, ISBI2016, ISBI2017, and HAM10000, achieving accuracies of 96.3%, 94.8%, and 88.5%, respectively. Further, the performance of MASK-RCNN is also validated on ISBI2016 and ISBI2017, attaining accuracies of 93.6% and 92.7%. To further increase confidence in the proposed framework, a fair comparison with other state-of-the-art methods is also provided.
•A contrast stretching technique is proposed to enhance the contrast of the infected region.
•Construction of a codebook using improved texture, color, and geometric features.
•Implementation of a feature selection technique based on PCA, skewness, and entropy.
•Preparation of a database of disease images for citrus leaves.
In agriculture, plant diseases are primarily responsible for reductions in production, which cause economic losses. Among plants, citrus is used as a major source of nutrients such as vitamin C throughout the world. However, citrus diseases badly affect the production and quality of citrus fruits. Over the last decade, computer vision and image processing techniques have been widely used for the detection and classification of plant diseases. In this article, we propose a hybrid method for the detection and classification of diseases in citrus plants. The proposed method consists of two primary phases: (a) detection of lesion spots on citrus fruits and leaves; (b) classification of citrus diseases. The citrus lesion spots are extracted by an optimized weighted segmentation method, which is performed on an enhanced input image. Then, color, texture, and geometric features are fused in a codebook. Furthermore, the best features are selected by a hybrid feature selection method, which consists of a PCA score, entropy, and a skewness-based covariance vector. The selected features are fed to a Multi-Class Support Vector Machine (M-SVM) for final citrus disease classification. The proposed technique is tested on the Citrus Disease Image Gallery dataset, a combined dataset (Plant Village and Citrus Images Database of Infested with Scale), and our own collected image database. We used these datasets for the detection and classification of the citrus diseases anthracnose, black spot, canker, scab, greening, and melanose. The proposed technique outperforms existing methods, achieving classification accuracies of 97% on the Citrus Disease Image Gallery dataset, 89% on the combined dataset, and 90.4% on our local dataset.
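The hybrid score-based selection can be illustrated roughly as below (numpy only, random data standing in for codebook features). The combination of PCA-loading, entropy, and skewness terms and their weights is a placeholder sketch, not the paper's exact formulation.

```python
# Sketch: rank features by PCA loading + histogram entropy + |skewness|.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (300, 12))          # hypothetical fused codebook features

def entropy_score(col, bins=16):
    """Shannon entropy of a feature's histogram (in bits)."""
    p, _ = np.histogram(col, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def skewness(col):
    c = col - col.mean()
    return float((c ** 3).mean() / (c.std() ** 3 + 1e-12))

# PCA score: each feature's absolute loading on the first principal axis.
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
pca_score = np.abs(Vt[0])

ent = np.array([entropy_score(X[:, j]) for j in range(X.shape[1])])
skw = np.abs([skewness(X[:, j]) for j in range(X.shape[1])])
scores = pca_score + 0.1 * ent + 0.1 * skw   # placeholder weighting

k = 5
selected = np.argsort(scores)[::-1][:k]      # indices of the top-k features
```

The retained columns would then be the input to the M-SVM classifier.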
•Discusses challenges in the detection and classification of citrus plant diseases.
•Briefly explains recent studies, including segmentation and classification.
•Compares this review with the existing state of the art.
•Discusses the advantages and drawbacks of each step in detail.
Citrus plants such as lemons, mandarins, oranges, tangerines, grapefruits, and limes are commonly grown fruits all over the world. Citrus-producing companies create a large amount of waste every year, whereby 50% of citrus peel is destroyed due to different plant diseases. This paper presents a survey of the different methods relevant to citrus plant leaf disease detection and classification. The article presents a detailed taxonomy of citrus leaf diseases. Initially, the challenges of each step, which affect detection and classification accuracy, are discussed in detail. In addition, a thorough literature review of automated disease detection and classification methods is presented. To this end, we study different image preprocessing, segmentation, feature extraction, feature selection, and classification methods, and also discuss the importance of feature extraction and deep learning methods. The survey presents a detailed discussion of the studies, outlines their strengths and limitations, and uncovers further research issues. The survey results reveal that the adoption of automated detection and classification methods for citrus plant diseases is still in its infancy. Hence, new tools are needed to fully automate the detection and classification processes.
Vision-based human action recognition (HAR) has been a hot research topic for the last decade due to popular applications such as visual surveillance and robotics. For correct action recognition, various local and global points, known as features, are required. These features change with variations in human movement. However, because several human actions differ only slightly, their features become mixed, which degrades recognition performance. In this article, we design a new 26-layered convolutional neural network (CNN) architecture for accurate complex action recognition. Features are extracted from the global average pooling layer and the fully connected (FC) layer and fused by a proposed high-entropy-based approach. Further, we propose a feature selection method named Poisson distribution along with Univariate Measures (PDaUM). Some of the fused CNN features are irrelevant and some are redundant, which leads to incorrect predictions among complex human actions. Therefore, the proposed PDaUM-based approach selects only the strongest features, which are later passed to an extreme learning machine (ELM) and softmax for final recognition. Four datasets are used for the experimental analysis: HMDB51 (51 classes), UCF Sports (10 classes), KTH (6 classes), and Weizmann (10 classes). On these datasets, the ELM classifier gives improved performance compared to the softmax classifier. The achieved accuracies are 81.4%, 99.2%, 98.3%, and 98.7%, respectively. In comparison with existing techniques, the proposed architecture gives better performance in terms of accuracy and testing time.
Brain tumors are one of the most dreadful types of cancer and have caused a huge number of deaths among children and adults over the past few years. According to WHO standards, about 700,000 people are living with a brain tumor and around 86,000 were diagnosed in 2019, while the total number of deaths due to brain tumors in 2019 is 16,830 and the average survival rate is 35%. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract the brain tumor, and the extracted tumors are passed to a pretrained CNN model for feature extraction. The extracted features are transferred to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets, 2015, 2017, and 2018, are utilized for the experiments and validation, accomplishing accuracies of 98.32%, 96.97%, and 92.67%, respectively. A comparison with existing techniques shows that the proposed design yields comparable accuracy.
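A minimal sketch of correlation-based feature selection (numpy only): keep the columns whose absolute Pearson correlation with the label clears a threshold. The synthetic matrix stands in for the pretrained CNN features, the binary label for the tumor type, and the 0.3 cutoff is purely illustrative.

```python
# Correlation-based feature selection on synthetic stand-in CNN features.
import numpy as np

rng = np.random.default_rng(3)
n, d = 400, 30
y = rng.integers(0, 2, n)                 # placeholder binary tumor labels
X = rng.normal(0, 1, (n, d))              # placeholder deep features
X[:, :5] += 1.5 * y[:, None]              # 5 label-correlated columns

# Absolute Pearson correlation of each feature column with the label.
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
selected = np.where(np.abs(corr) > 0.3)[0]
# `selected` columns would then feed the feed-forward network classifier.
```

On this toy data the five shifted columns are exactly the ones that survive the threshold.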
•A 3D convolutional neural network (CNN) architecture is proposed for tumor extraction.
•The pretrained VGG19 CNN model is utilized for feature extraction.
•Correlation-based and FNN-based best features are selected.
•Results are validated for the segmentation and classification steps.
Human action recognition (HAR) has gained much attention in the last few years due to its enormous applications, including human activity monitoring, robotics, and visual surveillance, to name but a few. Most previously proposed HAR systems have focused on hand-crafted image features. However, these features cover limited aspects of the problem and show performance degradation on large and complex datasets. Therefore, in this work, we propose a novel HAR system based on the fusion of conventional hand-crafted features using the histogram of oriented gradients (HoG) and deep features. Initially, the human silhouette is extracted with the help of a saliency-based method implemented in two phases. In the first phase, motion and geometric features are extracted from the selected channel, while the second phase calculates the Chi-square distance between the extracted features and threshold-based minimum-distance features. Afterwards, the extracted deep CNN and hand-crafted features are fused to generate a resultant vector. Moreover, to cope with the curse of dimensionality, an entropy-based feature selection technique is also proposed to identify the most discriminant features for classification using a multi-class support vector machine (M-SVM). All simulations are performed on five publicly available benchmark datasets: Weizmann, UCF11 (YouTube), UCF Sports, IXMAS, and UT-Interaction. A comparative evaluation is also presented to show that our proposed model achieves superior performance in comparison to existing methods.
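The Chi-square distance used in the silhouette-extraction phase, and the serial fusion of hand-crafted and deep features, can be sketched as follows; the tiny vectors are placeholders, not real HoG or CNN outputs.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two non-negative feature histograms."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

# Serial fusion: concatenate hand-crafted (e.g., HoG) and deep features.
hog_feat = np.array([0.2, 0.5, 0.3])          # placeholder HoG histogram
deep_feat = np.array([0.1, 0.6, 0.2, 0.1])    # placeholder CNN features
fused = np.concatenate([hog_feat, deep_feat])

d = chi_square_distance(np.array([0.5, 0.5]), np.array([1.0, 0.0]))
```

The fused vector would then pass through the entropy-based selection stage before M-SVM classification.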
•Motion and geometric features are extracted for human flow estimation and silhouette extraction.
•Deep CNN and hand-crafted features are fused through a parallel approach.
•An entropy-controlled Chi-square approach is proposed for best feature selection.
•Experiments are performed on several well-known datasets.
In this article, we implement an action recognition technique based on feature fusion and best feature selection. In the proposed method, HSI color transformation is first performed to improve the contrast of video frames, and their motion features are then extracted by the optical flow algorithm. A frame fusion approach extracts the moving regions found by optical flow. After that, shape and texture features are extracted and fused by a new parallel approach named length control features (LCF). A new weighted entropy-variances approach is applied to the combined vector to select the best features. Finally, the selected features are passed to an M-SVM for final classification into the relevant human actions. The experimental process is conducted on four famous action datasets, Weizmann, KTH, UCF Sports, and UCF YouTube, with recognition rates of 97.9%, 100%, 99.3%, and 94.5%, respectively. The experimental results show that the proposed scheme performs significantly better than the listed methods.
•A sparse activation function is applied to find the locations of active regions.
•Two segmented frames are fused using the multiplication law of probability.
•Features are fused using a parallel approach named length control features (LCF).
•A weighted entropy-variance controlled approach is proposed for feature selection.
In condition-based maintenance, different signal processing techniques are used to sense faults through the vibration and acoustic emission signals received from the machinery. These signal processing approaches mostly utilise time, frequency, and time-frequency domain analysis. The obtained features are later integrated with different machine learning techniques to classify the faults into different categories. In this work, different statistical features of vibration signals in the time and frequency domains are studied for the detection and localisation of faults in roller bearings, which are classified into healthy, outer race fault, inner race fault, and ball fault classes. The statistical features considered include the skewness, kurtosis, average, and root mean square values of the time domain vibration signals. These features are also extracted from the second derivative of the time domain vibration signals and from the power spectral density of the vibration signals. The vibration signal is also converted to the frequency domain and the same features are extracted. All three feature sets are concatenated, creating the time, frequency, and spectral power domain feature vectors. These feature vectors are finally fed into the K-nearest neighbour (KNN), support vector machine, and kernel linear discriminant analysis (KLDA) classifiers for the detection and classification of bearing faults. With the proposed method, a reduction percentage of more than 95% is achieved, which reduces not only the computational burden but also the classification time. Simulation results show that the signals are classified with an average accuracy of 99.13% using the KLDA and 96.64% using the KNN classifier. The results are also compared with empirical mode decomposition (EMD) features and Fourier transform features without extracting any statistical information, which are two of the most widely used approaches in the literature.
To gain a certain level of confidence in the classification results, a detailed statistical analysis is also provided.
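The feature-extraction stage can be sketched as follows (numpy only, on a synthetic vibration signal). The impulse train is a stand-in for a real bearing-fault signature, and the magnitude spectrum stands in for the power spectral density: the same four statistics are computed from the raw signal, its second derivative, and its spectrum, then concatenated into one vector.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 10_000                               # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
# Synthetic vibration signal: shaft tone + noise + periodic fault impacts.
sig = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.normal(size=t.size)
sig[::500] += 3.0

def stat_features(x):
    """Mean, RMS, skewness, and kurtosis of a signal."""
    c = x - x.mean()
    rms = np.sqrt(np.mean(x ** 2))
    skew = (c ** 3).mean() / (c.std() ** 3 + 1e-12)
    kurt = (c ** 4).mean() / (c.std() ** 4 + 1e-12)
    return np.array([x.mean(), rms, skew, kurt])

time_feat = stat_features(sig)                       # time domain
deriv_feat = stat_features(np.diff(sig, n=2))        # second derivative
spec_feat = stat_features(np.abs(np.fft.rfft(sig)))  # frequency domain
feature_vector = np.concatenate([time_feat, deriv_feat, spec_feat])
# `feature_vector` would then feed the KNN / SVM / KLDA classifiers.
```

The 12-dimensional vector per signal segment is what makes the reported dimensionality reduction possible compared with using raw samples or full spectra.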