Image processing plays a major role in neurologists' clinical diagnosis. Several types of imagery are used for diagnostics, tumor segmentation, and classification. Magnetic resonance imaging (MRI) is favored among all modalities due to its noninvasive nature and better representation of internal tumor information. Indeed, early diagnosis can greatly improve the chances of survival. However, manual segmentation and classification of brain tumors from MRI is an error-prone, time-consuming, and formidable task. Consequently, this article presents a deep learning approach to classify brain tumors from MRI data to assist practitioners. The recommended method comprises three main phases: preprocessing, brain tumor segmentation using k-means clustering, and finally, classification of tumors into their respective categories (benign/malignant) using a fine-tuned VGG19 (i.e., 19-layer Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the data available for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark dataset through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy than previously reported state-of-the-art techniques.
Following preprocessing, the region of interest (ROI) is extracted using k-means clustering, and a fine-tuned VGG19 model is applied for tumor classification (benign/malignant); accuracy is improved using synthetic data augmentation.
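The intensity-based segmentation step described above can be illustrated with a minimal k-means clustering of pixel values. This is a NumPy sketch under stated assumptions (cluster count `k`, the synthetic toy image, and the simple 1-D intensity feature are all illustrative choices, not details from the paper):

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Cluster pixel intensities into k groups (e.g., background,
    healthy tissue, tumor region) and return a label map."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    # Initialize centroids from randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        dists = np.abs(pixels - centroids.T)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)

# Toy "scan": dark background, mid-gray tissue, bright blob.
img = np.zeros((32, 32))
img[8:24, 8:24] = 0.5    # tissue
img[12:18, 12:18] = 1.0  # bright "tumor" region
seg = kmeans_segment(img, k=3)
print(seg.shape)  # (32, 32)
```

In a real pipeline the cluster whose centroid is brightest would typically be taken as the candidate tumor ROI before being passed to the classifier.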
Brain tumors are among the most dreadful forms of cancer and have caused a large number of deaths among children and adults over the past few years. According to WHO statistics, about 700,000 people are living with a brain tumor and around 86,000 have been diagnosed since 2019, while the total number of deaths due to brain tumors since 2019 is 16,830 and the average survival rate is 35%. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract the brain tumor, and the extracted tumors are passed to a pretrained CNN model for feature extraction. The extracted features are transferred to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, achieving accuracies of 98.32%, 96.97%, and 92.67%, respectively. A comparison with existing techniques shows that the proposed design yields comparable accuracy.
A 3D convolutional neural network (CNN) architecture is proposed for tumor extraction. The pretrained VGG19 CNN model is utilized for feature extraction. The best features are selected using a correlation-based method together with a feed-forward neural network (FNN). Results are validated for both the segmentation and classification steps.
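The correlation-based selection stage could be sketched as follows: rank each feature by the absolute Pearson correlation with the class label and keep the top-k. This is a minimal illustration, not the authors' implementation; the toy feature matrix and `top_k` value are assumptions:

```python
import numpy as np

def correlation_select(features, labels, top_k=2):
    """Rank feature columns by |Pearson correlation| with the label
    and return the indices of the top_k most relevant ones."""
    scores = np.array([
        abs(np.corrcoef(features[:, j], labels)[0, 1])
        for j in range(features.shape[1])
    ])
    return np.argsort(scores)[::-1][:top_k]

# Toy data: feature 0 tracks the label, feature 2 is anti-correlated,
# feature 1 is noise.
X = np.array([[1.0, 0.3, -1.0],
              [2.0, 0.9, -2.1],
              [3.0, 0.1, -2.9],
              [4.0, 0.7, -4.2]])
y = np.array([1.0, 2.0, 3.0, 4.0])
selected = correlation_select(X, y, top_k=2)
print(selected)  # indices of the two most label-correlated features
```

The selected columns would then be the input to the feed-forward classifier in a pipeline of this kind.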
Facial emotion recognition (FER) is an emerging and significant research area in the pattern recognition domain. In daily life, the role of non-verbal communication is significant: its share of overall communication is around 55% to 93%. Facial emotion analysis is used effectively in surveillance videos, expression analysis, gesture recognition, smart homes, computer games, depression treatment, patient monitoring, anxiety assessment, lie detection, psychoanalysis, paralinguistic communication, operator fatigue detection, and robotics. In this paper, we present a detailed review of FER. The literature is collected from reputable research published during the current decade. This review covers both conventional machine learning (ML) and various deep learning (DL) approaches. Further, publicly available FER datasets and evaluation metrics are discussed and compared with benchmark results. This paper provides a holistic review of FER using traditional ML and DL methods to highlight future gaps in this domain for new researchers. Finally, this review serves as a guidebook for young researchers in the FER area, providing a general understanding and basic knowledge of current state-of-the-art methods, and offers experienced researchers productive directions for future work.
The internet of things (IoT) has gained popularity in various fields due to its autonomous, low-cost sensor operations. In medical and healthcare applications, IoT devices form an ecosystem that senses patients' medical conditions, such as blood pressure, oxygen level, heartbeat, and temperature, and takes appropriate action in emergencies. Using it, patients' healthcare-related data are transmitted to remote users and medical centers for post-analysis. Different solutions based on Wireless Body Area Networks (WBANs) have been proposed to monitor patients' medical status using low-powered biosensor nodes; however, limiting energy consumption and communication costs remain demanding and interesting problems. Unbalanced energy consumption between biosensor nodes degrades the timely delivery of patient information to remote centers and negatively impacts the medical system. Moreover, sensitive patient data are transmitted over the insecure Internet and are prone to security threats. Therefore, protecting data privacy and integrity from malicious traffic is another challenging research issue for medical applications. This article proposes a secure and energy-efficient framework using the Internet of Medical Things (IoMT) for e-healthcare (SEF-IoMT), whose primary objective is to decrease communication overhead and energy consumption between biosensors while transmitting healthcare data conveniently; it also secures patients' medical data against unauthentic and malicious nodes to improve network privacy and integrity. The simulation results show that the proposed framework improves the performance of medical systems in terms of network throughput by 18%, packet loss rate by 44%, end-to-end delay by 26%, energy consumption by 29%, and link breaches by 48% compared with other state-of-the-art solutions.
Acute leukemia is a life-threatening disease, common in both children and adults, that can lead to death if left untreated. Acute Lymphoblastic Leukemia (ALL) spreads rapidly through a child's body and can take a life within a few weeks. To diagnose ALL, hematologists perform blood and bone marrow examinations. Manual blood testing techniques, in use for a long time, are often slow and yield less accurate diagnoses. This work improves the diagnosis of ALL with a computer-aided system that yields accurate results using image processing and deep learning techniques. This research proposes a method for classifying ALL into its subtypes and reactive bone marrow (normal) in stained bone marrow images. Robust segmentation and deep learning techniques with a convolutional neural network are used to train the model on bone marrow images to achieve accurate classification results. The experimental results were compared with those of other classifiers: Naïve Bayesian, KNN, and SVM. They reveal that the proposed method achieved 97.78% accuracy. These results show that the proposed approach could be used as a tool to diagnose Acute Lymphoblastic Leukemia and its subtypes, assisting pathologists.
This research proposed a method for the classification of Acute Lymphoblastic Leukemia (ALL) into its subtypes and reactive bone marrow (Normal) in stained bone marrow images using deep learning techniques with convolutional neural networks.
COVID‐19 has impacted the world in many ways, including loss of lives, economic downturn, and social isolation. COVID‐19 emerged due to SARS‐CoV‐2, the highly infectious virus behind the pandemic. Every country tried to control the spread of COVID‐19 by imposing different types of lockdown. Therefore, there is an urgent need to forecast daily confirmed infected cases and deaths under different types of lockdown, in order to select the most appropriate lockdown strategies to control the intensity of the pandemic and reduce the burden on hospitals. Three types of lockdown (partial, herd, complete) are currently imposed in different countries. In this study, three countries from each type of lockdown were studied by applying time‐series and machine learning models, namely random forests, K‐nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt‐Winters, ARIMA, and SARIMA, to forecast daily confirmed infected cases and deaths due to COVID‐19. The models' accuracy and effectiveness were evaluated using three error‐based performance criteria. No single forecasting model could capture the trends of all datasets, owing to the varying nature of the datasets and lockdown types. The three top‐ranked models were used to predict confirmed infected cases and deaths; the outperforming models were also adopted for out‐of‐sample prediction and obtained results very close to the actual cumulative infected cases and deaths due to COVID‐19. This study proposes promising models for forecasting and identifies the best lockdown strategy to mitigate the casualties of COVID‐19.
An optimized dataset model is proposed to predict lung infection and death cases due to COVID‐19 ten days ahead. The model's predictions were very close to the actual values. The best policy to control infection and death cases was the herd lockdown strategy.
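Of the time-series models named above, Holt's linear (trend) smoothing is the simplest to sketch. The pure-Python version below is a minimal illustration only: the smoothing parameters `alpha` and `beta` and the toy case series are arbitrary assumptions, not values fitted in the study:

```python
def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Holt's linear trend method: exponentially smooth the level
    and the trend, then extrapolate `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        # Update the smoothed level from the new observation.
        level = alpha * y + (1 - alpha) * (level + trend)
        # Update the smoothed trend from the level change.
        trend = beta * (level - last_level) + (1 - beta) * trend
    # The h-step-ahead forecast is level + h * trend.
    return [level + (h + 1) * trend for h in range(horizon)]

# Toy cumulative case counts rising roughly linearly.
cases = [100, 120, 140, 160, 180, 200]
print(holt_forecast(cases, horizon=3))  # extrapolates the linear trend
```

For a perfectly linear input such as this, the method recovers the trend exactly and extrapolates it; real case curves would of course require parameter tuning, as done in the study's model comparison.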
Human action recognition (HAR) has gained much attention in the last few years due to its enormous applications, including human activity monitoring, robotics, and visual surveillance, to name but a few. Most previously proposed HAR systems have focused on hand-crafted image features. However, these features cover limited aspects of the problem and show performance degradation on large and complex datasets. Therefore, in this work, we propose a novel HAR system based on the fusion of conventional hand-crafted features, using histograms of oriented gradients (HoG), and deep features. Initially, the human silhouette is extracted with the help of a saliency-based method implemented in two phases. In the first phase, motion and geometric features are extracted from the selected channel, whilst the second phase calculates the Chi-square distance between the extracted features and threshold-based minimum-distance features. Afterwards, the extracted deep CNN and hand-crafted features are fused to generate a resultant vector. Moreover, to cope with the curse of dimensionality, an entropy-based feature selection technique is also proposed to identify the most discriminant features for classification using a multi-class support vector machine (M-SVM). All simulations are performed on five publicly available benchmark datasets: Weizmann, UCF11 (YouTube), UCF Sports, IXMAS, and UT-Interaction. A comparative evaluation is also presented to show that our proposed model achieves superior performance compared with existing methods.
•Motion and geometric features are extracted for human flow estimation and silhouette extraction.
•Deep CNN and hand-crafted features are fused through a parallel approach.
•An entropy-controlled Chi-square approach is proposed for best-feature selection.
•Experiments are performed on several well-known datasets.
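The fusion and entropy-based selection steps above might be sketched as follows: deep and hand-crafted vectors are concatenated, then feature columns are ranked by a histogram-entropy score and the top-k kept. The bin count, scoring detail, and toy data are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def entropy_score(column, bins=8):
    """Shannon entropy of one feature column, estimated from a histogram."""
    hist, _ = np.histogram(column, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def fuse_and_select(deep_feats, hog_feats, top_k):
    """Concatenate deep and hand-crafted features, then keep the
    top_k columns with the highest entropy (most informative spread)."""
    fused = np.concatenate([deep_feats, hog_feats], axis=1)
    scores = np.array([entropy_score(fused[:, j]) for j in range(fused.shape[1])])
    idx = np.argsort(scores)[::-1][:top_k]
    return fused[:, idx], idx

rng = np.random.default_rng(0)
deep = rng.normal(size=(50, 4))  # stand-in for CNN features
hog = np.ones((50, 2))           # constant (uninformative) columns
selected, idx = fuse_and_select(deep, hog, top_k=4)
print(selected.shape, sorted(idx.tolist()))
```

Constant columns carry zero entropy and are discarded, which is the intuition behind entropy-controlled selection: columns whose values barely vary cannot discriminate between action classes.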
In digital mammography, accurate breast profile segmentation of a woman's mammogram is considered a challenging task. The presence of the pectoral muscle may mislead the diagnosis of cancer due to its high-level similarity to the breast body. Further challenges arising from the manifestation of the pectoral muscle in mammogram data include inaccurate estimation of the density level and assessment of the cancer cells. The discrete differentiation operator has proven able to eliminate the pectoral muscle before the analysis processing.
We propose a novel approach to remove the pectoral muscle from the mediolateral-oblique view of a mammogram using a discrete differentiation operator, which detects edge boundaries and approximates the gradient of the intensity function. Further refinement is achieved using a convex hull technique. This method is implemented on the dataset provided by MIAS and on 20 contrast-enhanced digital mammographic images.
To assess the performance of the proposed method, visual inspection by a radiologist as well as calculations based on well-known metrics are used. For the performance metrics, the pixels in the pectoral muscle region of the input scans serve as ground truth.
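With pixel-level ground truth available, segmentation quality is typically scored with an overlap metric. The abstract does not name the specific metrics, so the Dice coefficient below is an assumed, representative example rather than the paper's actual choice:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1  # ground-truth pectoral region (16 px)
pred = np.zeros((8, 8), dtype=int)
pred[2:6, 3:7] = 1   # prediction shifted one column (16 px)
print(dice_coefficient(pred, truth))  # 0.75
```

A score of 1.0 means perfect overlap with the radiologist-marked region; the shifted toy prediction overlaps on 12 of 16 pixels, hence 0.75.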
Our approach tolerates an extensive variety of pectoral muscle geometries, with less risk of bias in the breast profile than existing techniques.
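As a rough illustration of the discrete differentiation idea, the sketch below applies Sobel kernels to estimate the gradient magnitude of the intensity function, which is what locates the muscle-breast boundary. It is a generic edge-detection sketch with a synthetic patch as an assumption, not the paper's full pipeline (which also includes convex-hull refinement):

```python
import numpy as np

# Sobel kernels approximate the discrete partial derivatives.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def conv2(img, kernel):
    """Valid-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i+3, j:j+3] * kernel).sum()
    return out

def gradient_magnitude(img):
    gx, gy = conv2(img, KX), conv2(img, KY)
    return np.hypot(gx, gy)

# Toy patch: bright "muscle" region left of a vertical boundary.
img = np.zeros((16, 16))
img[:, :8] = 1.0
mag = gradient_magnitude(img)
# The column with the largest summed gradient marks the boundary.
boundary_col = mag.sum(axis=0).argmax()
print(mag.shape, boundary_col)
```

In the real method, the detected boundary pixels would then be refined with a convex hull to produce a smooth muscle contour for removal.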
Alzheimer’s disease (AD) is an incurable neurodegenerative disorder accounting for 70%–80% of dementia cases worldwide. Although research on AD has increased in recent years, the complexity associated with brain structure and function makes early diagnosis of this disease a challenging task. Resting-state functional magnetic resonance imaging (rs-fMRI) is a neuroimaging technology that has been widely used to study the pathogenesis of neurodegenerative diseases. In the literature, computer-aided diagnosis of AD is limited to binary classification or to diagnosis of the AD and MCI stages; its applicability to diagnosing multiple progressive stages of AD is relatively under-studied. This study explores the effectiveness of rs-fMRI for multi-class classification of AD and its associated stages, including CN, SMC, EMCI, MCI, LMCI, and AD. A longitudinal cohort of resting-state fMRI from 138 subjects (25 CN, 25 SMC, 25 EMCI, 25 LMCI, 13 MCI, and 25 AD) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) is studied. To provide better insight into deep learning approaches and their application to AD classification, we investigate the ResNet-18 architecture in detail. We consider training the network from scratch using single-channel input, as well as transfer learning with and without fine-tuning using an extended network architecture. We experimented with residual neural networks for the AD classification task and compared the results with former research in this domain. The performance of the models is evaluated using precision, recall, F1-measure, AUC, and ROC curves. We found that our networks were able to classify the subjects significantly well. We achieved improved results with our fine-tuned model for all the AD stages, with accuracies of 100%, 96.85%, 97.38%, 97.43%, 97.40%, and 98.01% for CN, SMC, EMCI, LMCI, MCI, and AD, respectively.
In terms of overall performance, we achieved state-of-the-art results, with average accuracies of 97.92% and 97.88% for the off-the-shelf and fine-tuned models, respectively. The analysis of the results indicates that classification and prediction of neurodegenerative brain disorders such as AD using functional magnetic resonance imaging and advanced deep learning methods is promising for clinical decision making and has the potential to assist in the early diagnosis of AD and its associated stages.
An entity's existence in an image can be depicted by the activity instantiation vector from a group of neurons, called a capsule. Recently, multi-layered capsules, called CapsNet, have proven to be state-of-the-art for image classification tasks. This research utilizes the prowess of this algorithm to detect pneumonia from chest X-ray (CXR) images. Here, an entity in the CXR image can help determine whether the patient (whose CXR is used) is suffering from pneumonia. A simple model of capsules (also known as Simple CapsNet) has provided results comparable to the best deep learning models used earlier. Subsequently, a combination of convolutions and capsules is used to obtain two models that outperform all previously proposed models. These models, Integration of Convolutions with Capsules (ICC) and Ensemble of Convolutions with Capsules (ECC), detect pneumonia with test accuracies of 95.33% and 95.90%, respectively. The latter model is studied in detail to obtain a variant called EnCC, where n = 3, 4, 8, 16. Here, the E4CC model works optimally and gives a test accuracy of 96.36%. All these models were trained, validated, and tested on 5857 images from Mendeley.
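CapsNet's characteristic non-linearity is the squashing function, which shrinks short activity vectors toward zero and pushes long ones toward unit length, so that a capsule's output norm can represent the probability that its entity exists. Below is a minimal NumPy sketch of this generic capsule-network building block (not this paper's exact model):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule squashing: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    The output norm lies in [0, 1) and the direction of s is preserved."""
    sq_norm = (s ** 2).sum(axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

short = squash(np.array([0.1, 0.0]))   # weak activation -> norm near 0
strong = squash(np.array([100.0, 0.0]))  # strong activation -> norm near 1
print(np.linalg.norm(short), np.linalg.norm(strong))
```

In a full CapsNet, this function is applied to the weighted sums produced by routing-by-agreement, so a long output vector signals a confidently detected entity (here, pneumonia evidence in a CXR).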