• Decorrelation-formulation-based contrast improvement.
• Lesion segmentation using a modified Mask R-CNN.
• Transfer-learning-based CNN features are extracted.
• An entropy-controlled LS-SVM selects the best CNN features.
Malignant melanoma is considered one of the deadliest types of skin cancer and is responsible for a massive number of deaths worldwide. According to the American Cancer Society (ACS), more than a million Americans are living with melanoma. In 2019, 192,310 new cases of melanoma were registered, of which 95,380 were noninvasive and 96,480 invasive. The number of deaths due to melanoma in 2019 alone was 7,230, comprising 4,740 men and 2,490 women. Melanoma may be curable if diagnosed at an early stage; however, manual diagnosis is time-consuming and depends on an expert dermatologist. In this work, a fully automated computer-aided diagnosis (CAD) system is proposed based on a deep learning framework. In the proposed scheme, the original dermoscopic images are first pre-processed using a decorrelation formulation technique, and the resultant images are then passed to Mask R-CNN for lesion segmentation. In this step, the Mask R-CNN model is trained on segmented RGB images generated from the ground-truth images of the ISBI2016 and ISIC2017 datasets. The resultant segmented images are then passed to a DenseNet deep model for feature extraction. Two different layers, average pool and fully connected, are used for feature extraction; their outputs are combined, and the resultant vector is forwarded to the feature selection block for down-sampling using the proposed entropy-controlled least squares SVM (LS-SVM). Three datasets are utilized for validation, ISBI2016, ISBI2017, and HAM10000, achieving accuracies of 96.3%, 94.8%, and 88.5%, respectively. Further, the performance of Mask R-CNN is also validated on ISBI2016 and ISBI2017, attaining accuracies of 93.6% and 92.7%. To further increase confidence in the proposed framework, a fair comparison with other state-of-the-art methods is also provided.
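The abstract leaves the entropy-controlled selection step at a high level. As a rough, illustrative sketch of the entropy-ranking idea only (the bin count, keep ratio, and all names below are assumptions, not the paper's settings), each feature column can be scored by its histogram-based Shannon entropy and the higher-entropy half retained before classification:

```python
import numpy as np

def feature_entropy(column, bins=16):
    """Shannon entropy of one feature column, estimated from a histogram."""
    hist, _ = np.histogram(column, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def entropy_select(features, keep_ratio=0.5):
    """Rank feature columns by entropy and keep the top fraction."""
    scores = np.array([feature_entropy(features[:, j])
                       for j in range(features.shape[1])])
    k = max(1, int(keep_ratio * features.shape[1]))
    idx = np.sort(np.argsort(scores)[::-1][:k])   # highest-entropy columns
    return features[:, idx], idx

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:, 3] = 0.0                         # a constant (zero-entropy) column
X_sel, kept = entropy_select(X, keep_ratio=0.5)
```

A constant column carries no information and is ranked last; the actual method couples this kind of ranking with the LS-SVM objective, which is not reproduced here.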
Brain tumor identification using magnetic resonance images (MRI) is an important research domain in the field of medical imaging. The use of computerized techniques helps doctors in the diagnosis and treatment of brain cancer. In this article, an automated system is developed for tumor extraction and classification from MRI. It is based on marker-based watershed segmentation and feature selection. Five primary steps are involved in the proposed system: tumor contrast enhancement, tumor extraction, multimodel feature extraction, feature selection, and classification. A gamma contrast stretching approach is implemented to improve the contrast of a tumor. Then, segmentation is performed using a marker-based watershed algorithm. Shape, texture, and point features are extracted in the next step, and only the highest-ranked 70% of features are selected through a chi-square max conditional priority features approach. In a later step, the selected features are fused using a serial-based concatenation method before classification with a support vector machine. All experiments are performed on three datasets: Harvard, BRATS 2013, and a privately collected MR image dataset. Simulation results clearly show that the proposed system outperforms existing methods with greater precision and accuracy.
An automated system is proposed for tumor extraction and classification based on marker-based watershed segmentation and feature selection. The system includes tumor contrast enhancement, tumor extraction, multimodel feature extraction, feature selection, and classification.
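The chi-square max conditional priority criterion is not spelled out in the abstract; as a minimal stand-in, the sketch below computes a plain chi-square score per (non-negative) feature against the class labels and keeps the top-ranked 70% (function names and data are illustrative, not the paper's method):

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square statistic of each non-negative feature vs. class labels."""
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    class_prob = np.array([(y == c).mean() for c in classes])[:, None]
    expected = class_prob * X.sum(axis=0)[None, :]   # under independence
    return ((observed - expected) ** 2 / expected).sum(axis=0)

def select_top(X, y, ratio=0.7):
    """Keep the highest-ranked fraction of features by chi-square score."""
    k = max(1, int(round(ratio * X.shape[1])))
    idx = np.sort(np.argsort(chi2_scores(X, y))[::-1][:k])
    return X[:, idx], idx

rng = np.random.default_rng(1)
X = rng.random((60, 10))              # toy non-negative feature matrix
y = rng.integers(0, 2, size=60)       # toy binary labels
X_sel, kept = select_top(X, y, ratio=0.7)
```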
In the agricultural farming business, plant diseases are one of the causes of financial losses around the globe. This is a fundamental factor, as diseases cause a significant reduction in both the quantity and quality of growing crops. Among plants, fruits are one of the major sources of nutrients; however, a wide range of diseases adversely affect both the quality and production of fruits. To overcome this predicament, computer vision (CV) based methods have been introduced. These methods are quite effective: they not only detect diseases/infections at the early stages but also assign them a label. In this article, we propose a deep convolutional neural network-based method for disease classification of different fruits' leaves. Initially, deep features are extracted using pre-trained deep models, including VGG-S and AlexNet, which are fine-tuned by employing the concept of transfer learning. A multi-level fusion methodology is also proposed prior to the selection step, based on an entropy-controlled threshold value calculated by averaging the selected features. The resultant final feature vector is fed into a host classifier, a multi-SVM. Five different diseases are considered in the experiments, including apple black rot, apple scab, apple rust, cherry powdery mildew, and peach bacterial spot, collected from the PlantVillage dataset. Classification results clearly show the improved performance of the proposed method in terms of sensitivity (97.6%), accuracy (97.8%), precision (97.6%), and G-measure (97.6%).
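The entropy-controlled threshold "calculated by averaging the selected features" is only described loosely above; the toy sketch below approximates the idea with a mean-activation threshold over the fused vector (the feature values and function name are made up for illustration):

```python
import numpy as np

def fuse_and_threshold(f1, f2):
    """Concatenate two deep-feature vectors, then keep only entries whose
    activation exceeds the mean of the fused vector (a simple stand-in for
    the paper's averaged, entropy-controlled threshold)."""
    fused = np.concatenate([f1, f2])
    threshold = fused.mean()
    return fused[fused >= threshold]

vgg_feats = np.array([0.1, 0.9, 0.4, 0.8])    # hypothetical VGG-S activations
alex_feats = np.array([0.2, 0.7, 0.05, 0.6])  # hypothetical AlexNet activations
selected = fuse_and_threshold(vgg_feats, alex_feats)
```

Only the above-average activations survive, shrinking the fused vector before it reaches the multi-SVM classifier.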
Human action recognition (HAR) has gained much attention in the last few years due to its numerous applications, including human activity monitoring, robotics, and visual surveillance, to name but a few. Most previously proposed HAR systems have focused on hand-crafted image features. However, these features cover limited aspects of the problem and show performance degradation on large and complex datasets. Therefore, in this work, we propose a novel HAR system based on the fusion of conventional hand-crafted features, using a histogram of oriented gradients (HOG), and deep features. Initially, the human silhouette is extracted with the help of a saliency-based method implemented in two phases. In the first phase, motion and geometric features are extracted from the selected channel, whilst the second phase calculates the chi-square distance between the extracted features and threshold-based minimum-distance features. Afterwards, the extracted deep CNN and hand-crafted features are fused to generate a resultant vector. Moreover, to cope with the curse of dimensionality, an entropy-based feature selection technique is also proposed to identify the most discriminant features for classification using a multi-class support vector machine (M-SVM). All simulations are performed on five publicly available benchmark datasets: Weizmann, UCF11 (YouTube), UCF Sports, IXMAS, and UT-Interaction. A comparative evaluation is also presented, showing that our proposed model achieves superior performance in comparison to existing methods.
• Motion and geometric features are extracted for human flow estimation and silhouette extraction.
• Deep CNN and hand-crafted features are fused through a parallel approach.
• An entropy-controlled chi-square approach is proposed for best-feature selection.
• Experiments are performed on several well-known datasets.
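As a hedged illustration of the hand-crafted side of the fusion above, the sketch below computes a single-cell HOG-style orientation histogram and serially concatenates it with a dummy deep-feature vector; real HOG uses many cells and block normalization, which are omitted here:

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Orientation histogram of one cell: gradient magnitudes binned by
    unsigned gradient angle (0-180 degrees)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())   # magnitude-weighted votes
    return hist

def serial_fuse(hog_vec, deep_vec):
    """Serial (concatenation) fusion of hand-crafted and deep features."""
    return np.concatenate([hog_vec, deep_vec])

patch = np.tile(np.arange(8.0), (8, 1))   # horizontal ramp: pure x-gradient
h = hog_cell_histogram(patch)
fused = serial_fuse(h, np.ones(4))        # dummy 4-dim "deep" vector
```

For the ramp patch every gradient points along 0 degrees, so all votes land in the first bin.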
Skin cancer is one of the deadliest types of cancer and has grown extensively worldwide over the last decade. For accurate detection and classification of melanoma, several measures should be considered, including contrast stretching, irregularity measurement, and the selection of the most optimal features. Poor lesion contrast affects segmentation accuracy and also increases classification error. To overcome this problem, an efficient model for accurate border detection and classification is presented. The proposed model improves segmentation accuracy in its preprocessing phase by enhancing the contrast of the lesion area compared to the background. The enhanced 2D blue channel is selected for the construction of a saliency map, at the end of which a threshold function produces the binary image. In addition, particle swarm optimization (PSO) based segmentation is utilized for accurate border detection and refinement. Selected features, including shape, texture, local, and global features, are extracted and then chosen by a genetic algorithm, with the advantage of identifying the fittest chromosome. Finally, the optimized features are fed into a support vector machine (SVM) for classification. Comprehensive experiments have been carried out on three datasets: PH2, ISBI2016, and ISIC (i.e., ISIC MSK-1, ISIC MSK-2, and ISIC UDA). Improved accuracies of 97.9%, 99.1%, 98.4%, and 93.8%, respectively, are obtained. The SVM outperforms on the selected datasets in terms of sensitivity, precision rate, accuracy, and FNR. Furthermore, the selection method successfully removes redundant features.
A hybrid contrast stretching approach is proposed for lesion enhancement. A saliency map is constructed for segmentation of the skin lesion. Four types of features are extracted, the best features are selected with a GA, and classification is performed using an M-SVM.
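Genetic-algorithm feature selection, as used above, can be sketched as follows; the population size, Fisher-ratio fitness, truncation selection, one-point crossover, and mutation rate are all illustrative choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(mask, X, y):
    """Score a binary feature mask by the summed Fisher ratio (class-mean
    separation over pooled variance) of its selected features."""
    if not mask.any():
        return 0.0
    Xa, Xb = X[y == 0][:, mask], X[y == 1][:, mask]
    ratio = (Xa.mean(0) - Xb.mean(0)) ** 2 / (Xa.var(0) + Xb.var(0) + 1e-9)
    return float(ratio.sum())

def ga_select(X, y, pop=20, gens=30, p_mut=0.1):
    """Tiny genetic algorithm over binary feature masks (chromosomes)."""
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5                    # random initial population
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in P])
        parents = P[np.argsort(scores)[::-1][:pop // 2]]   # keep the fittest half
        cuts = rng.integers(1, n, size=pop // 2)
        kids = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cuts)])     # one-point crossover
        kids ^= rng.random(kids.shape) < p_mut             # bit-flip mutation
        P = np.vstack([parents, kids])
    scores = np.array([fitness(ind, X, y) for ind in P])
    return P[int(np.argmax(scores))]                       # fittest chromosome

X = rng.normal(size=(80, 6))
y = rng.integers(0, 2, size=80)
X[y == 1, 0] += 3.0                   # only feature 0 separates the classes
best = ga_select(X, y)
```

The GA converges on masks that include the single discriminative feature, since any chromosome carrying it dominates the fitness ranking.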
Human action recognition from video sequences has received much attention lately in the field of computer vision due to its range of applications in surveillance, healthcare, smart homes, and tele-immersion, to name but a few. However, it still faces several challenges, such as human variations, occlusion, changes in illumination, and complex backgrounds. In this article, we consider the problems related to multiple-human detection and classification using a novel statistical weighted segmentation and rank-correlation-based feature selection approach. Initially, preprocessing is performed on a set of frames to remove existing noise and to make the foreground maximally differentiable from the background. A novel weighted segmentation method is also introduced for human extraction prior to feature extraction. Ternary features are exploited, including color, shape, and texture, which are later combined using a serial-based feature fusion method. To avoid redundancy, a rank-correlation-based feature selection technique is employed, which acts as a feature optimizer and leads to improved classification accuracy. The proposed method is validated on six datasets, including Weizmann, KTH, MuHAVi, WVU, UCF Sports, and MSR Action, and evaluated on seven performance measures. A fair comparison with existing work is also provided, which proves the significance of the proposed method compared to other techniques.
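The rank-correlation feature optimizer is not detailed in the abstract; a minimal interpretation, sketched below with an assumed redundancy threshold, drops any feature whose Spearman rank correlation with an already-kept feature is too high:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def drop_redundant(X, threshold=0.9):
    """Greedily drop feature columns whose rank correlation with an
    already-kept column exceeds the threshold (a simple stand-in for the
    paper's rank-correlation feature optimizer)."""
    kept = []
    for j in range(X.shape[1]):
        if all(abs(spearman(X[:, j], X[:, k])) < threshold for k in kept):
            kept.append(j)
    return X[:, kept], kept

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
X = np.column_stack([X, X[:, 0] * 2.0 + 1.0])  # column 4 has column 0's ranks
X_sel, kept = drop_redundant(X)
```

The monotone copy of column 0 has rank correlation exactly 1 with it and is removed, while the independent columns survive.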
Doctors utilize various kinds of clinical technologies, such as MRI, endoscopy, and CT scans, to identify a patient's abnormalities during the review time. Among this set of clinical technologies, wireless capsule endoscopy (WCE) is an advanced procedure used to examine digestive tract malformations. During this complete process, more than 57,000 frames are captured, and doctors need to examine the complete video frame by frame, which is a tedious task even for an experienced gastroenterologist. In this article, a novel computerized automated method is proposed for the classification of abdominal infections of the gastrointestinal tract from WCE images. The three core steps of the suggested system are segmentation, deep feature extraction and fusion, and robust feature selection. The ulcer abnormalities in WCE videos are initially extracted through a proposed color-features-based low-level and high-level saliency (CFbLHS) estimation method. Later, a DenseNet CNN model is utilized, and features are computed through transfer learning (TL) prior to feature optimization using Kapur's entropy. A parallel fusion methodology is adopted for the selection of the maximum feature value (PMFV). For feature selection, the Tsallis entropy is calculated and the features are sorted in descending order. Finally, the top 50% highest-ranked features are selected for classification using a multilayered feedforward neural network classifier. Simulation is performed on the collected WCE dataset, achieving a maximum accuracy of 99.5% in 21.15 s.
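The Tsallis-entropy ranking with top-50% retention described above can be sketched as follows (the entropic index q and histogram bin count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def tsallis_entropy(column, q=2.0, bins=16):
    """Tsallis entropy of one feature, estimated from a histogram:
    S_q = (1 - sum(p^q)) / (q - 1)."""
    hist, _ = np.histogram(column, bins=bins)
    p = hist / hist.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def top_half_by_tsallis(features):
    """Sort features by Tsallis entropy (descending) and keep the top 50%."""
    scores = np.array([tsallis_entropy(features[:, j])
                       for j in range(features.shape[1])])
    keep = np.sort(np.argsort(scores)[::-1][: features.shape[1] // 2])
    return features[:, keep], keep

rng = np.random.default_rng(4)
F = rng.random((200, 10))
F[:, 7] = 0.5                 # constant column: zero Tsallis entropy
F_sel, kept = top_half_by_tsallis(F)
```

A constant feature has all its probability mass in one bin, giving S_q = 0, so it is ranked last and excluded from the top half.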
Medical imaging systems installed in different hospitals and labs generate images in bulk, which can help medics analyze infections or injuries. Manual inspection becomes difficult when many images exist; therefore, intelligent systems are usually required for real-time diagnosis. Melanoma is one of the most common and severe forms of skin cancer; it begins in the cells beneath the skin. Through dermoscopic images, it is possible to diagnose the infection at the early stages. In this regard, different approaches have been exploited for improved results. In this study, we propose a two-stream deep neural network information fusion framework for multiclass skin cancer classification. The proposed technique follows two streams: initially, a fusion-based contrast enhancement technique is proposed, which feeds enhanced images to the pretrained DenseNet201 architecture. The extracted features are later optimized using a skewness-controlled moth-flame optimization algorithm. In the second stream, deep features from the fine-tuned MobileNetV2 pretrained network are extracted and down-sampled using the proposed feature selection framework. Finally, the most discriminant features from both networks are fused using a new parallel multimax coefficient correlation method. A multiclass extreme learning machine classifier is used to classify the lesion images. The testing process is conducted on three imbalanced skin datasets: HAM10000, ISBI2018, and ISIC2019. The simulations are performed without any data augmentation step, achieving accuracies of 96.5%, 98%, and 89%, respectively. A fair comparison with existing techniques reveals the improved performance of our proposed algorithm.
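The "parallel multimax coefficient correlation" fusion is specific to the paper and not specified here; as a generic stand-in for parallel (element-wise) fusion of two unequal-length streams, the sketch below zero-pads the shorter vector and takes the element-wise maximum (stream names and values are hypothetical):

```python
import numpy as np

def parallel_max_fuse(f1, f2):
    """Parallel fusion: pad the shorter vector with zeros, then take the
    element-wise maximum of the two feature streams."""
    n = max(f1.size, f2.size)
    a = np.pad(f1.astype(float), (0, n - f1.size))
    b = np.pad(f2.astype(float), (0, n - f2.size))
    return np.maximum(a, b)

dense_feats = np.array([0.2, 0.9, 0.1])          # hypothetical DenseNet201 stream
mobile_feats = np.array([0.5, 0.3, 0.4, 0.7])    # hypothetical MobileNetV2 stream
fused = parallel_max_fuse(dense_feats, mobile_feats)
```

Unlike serial concatenation, parallel fusion keeps the output length equal to the longer stream rather than the sum of both.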
The emergence of cloud infrastructure has the potential to provide significant benefits in a variety of areas in the medical imaging field. The driving force behind the extensive use of cloud infrastructure for medical image processing is the exponential increase in the size of computed tomography (CT) and magnetic resonance imaging (MRI) data. The size of a single CT/MRI image has increased manifold since the inception of these imaging techniques. This demands the introduction of effective and efficient frameworks for extracting relevant and suitable information (features) from these sizeable images. As early detection of lung cancer can significantly increase a patient's chances of survival, an effective and efficient nodule detection system can play a vital role. In this article, we propose a novel classification framework for lung nodule classification with a low false positive rate (FPR), high accuracy and sensitivity, and low computational cost, which uses a small set of features while preserving edge and texture information. The proposed framework comprises multiple phases, including image contrast enhancement, segmentation, and feature extraction, followed by the use of these features for training and testing a selected classifier. Image preprocessing and feature selection are the primary steps and play a vital role in achieving improved classification accuracy. We have empirically tested the efficacy of our technique on the well-known Lung Image Database Consortium (LIDC) dataset. The results show that the technique is highly effective in reducing FPRs, with an impressive sensitivity rate of 97.45%.
A novel classification framework for lung nodule classification is proposed to reduce the false positive rate and achieve an impressive sensitivity rate. It is computationally efficient and yields precise results using few features while preserving edge and texture information.
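The contrast-enhancement phase mentioned above can be illustrated with a simple min-max stretch followed by gamma correction (a generic stand-in; the toy intensities and gamma value are not taken from the paper):

```python
import numpy as np

def contrast_stretch(img, gamma=0.7):
    """Min-max stretch to [0, 1] followed by gamma correction, a simple
    stand-in for the framework's contrast-enhancement step."""
    img = img.astype(float)
    stretched = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return stretched ** gamma

ct_slice = np.array([[50.0, 60.0], [70.0, 100.0]])   # toy CT intensities
enhanced = contrast_stretch(ct_slice, gamma=1.0)     # gamma=1: pure stretch
```

With gamma below 1 the mapping brightens dark regions; with gamma above 1 it suppresses them, so the exponent controls which intensity range gains contrast.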
Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods that can classify multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel deep saliency segmentation method, which uses a custom convolutional neural network (CNN) of ten layers. The generated heat map is converted into a binary image using a thresholding function. Next, the segmented color lesion images are used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified using the kernel extreme learning machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance is evaluated on the HAM10000 dataset, achieving an accuracy of 90.67%. To prove the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
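The KELM classifier used above has a standard closed form; the sketch below implements a basic RBF-kernel ELM, solving (K + I/C) beta = Y for the output weights (the regularization C, kernel width gamma, and toy two-cluster data are illustrative, not the paper's configuration):

```python
import numpy as np

def kelm_train(X, y, C=1.0, gamma=0.5):
    """Kernel extreme learning machine with an RBF kernel: solve the
    regularized linear system (K + I/C) beta = Y in closed form."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    Y = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
    beta = np.linalg.solve(K + np.eye(len(X)) / C, Y)
    return X, beta

def kelm_predict(model, Xq, gamma=0.5):
    """Predict by evaluating the kernel against the training set."""
    Xtr, beta = model
    sq = ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.argmax(np.exp(-gamma * sq) @ beta, axis=1)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),    # class-0 cluster
               rng.normal(3.0, 0.3, (20, 2))])   # class-1 cluster
y = np.array([0] * 20 + [1] * 20)
model = kelm_train(X, y)
pred = kelm_predict(model, X)
```

Because the output weights come from one linear solve rather than iterative training, KELM is fast to fit, which is part of its appeal for large fused feature sets.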