Recently, electroencephalogram (EEG)-based emotion recognition has become crucial to making Human-Computer Interaction (HCI) systems more intelligent. Owing to its prominent applications, e.g., person-based decision making, mind-machine interfacing, cognitive interaction, and affect and feeling detection, emotion recognition has attracted considerable attention in recent AI-empowered research. Numerous studies driven by a range of approaches have therefore been conducted, which calls for a systematic review of the methodologies, feature sets, and techniques used for this task. Such a review will guide beginners toward composing an effective emotion recognition system. In this article, we rigorously review state-of-the-art emotion recognition systems published in the recent literature and summarize common emotion recognition steps with relevant definitions, theories, and analyses to provide the key knowledge needed to develop a proper framework. The studies included here are divided into two categories: i) deep learning-based and ii) shallow machine learning-based emotion recognition systems. The reviewed systems are compared in terms of method, classifier, number of classified emotions, accuracy, and dataset used. An informative comparison, recent research trends, and recommendations for future research directions are also provided.
Emotion recognition using Artificial Intelligence (AI) is a fundamental prerequisite for improving Human-Computer Interaction (HCI). Recognizing emotion from the Electroencephalogram (EEG) has been widely adopted in applications such as intelligent thinking, decision-making, social communication, feeling detection, and affective computing. Nevertheless, because the EEG signal exhibits very low amplitude variation over time, properly recognizing emotion from it is highly challenging. Considerable effort is usually required to identify a suitable feature or feature set for an effective feature-based emotion recognition system. To reduce the manual effort of feature extraction, we propose a deep learning model based on a Convolutional Neural Network (CNN). First, the one-dimensional EEG data are converted into Pearson's Correlation Coefficient (PCC) feature images representing the channel correlations of the EEG sub-bands. The images are then fed into the CNN model to recognize emotion. Two protocols were conducted: protocol-1 to identify two levels, and protocol-2 to recognize three levels, of the valence and arousal that characterize emotion. We found that using only the upper triangular portion of the PCC feature images reduced the computational complexity and memory footprint without hampering model accuracy. Maximum accuracies of 78.22% on valence and 74.92% on arousal were obtained using the widely used DEAP dataset.
• An EEG-based emotion recognition model is proposed using a Convolutional Neural Network architecture.
• Pearson's Correlation Coefficients (PCC) of the alpha, beta, and gamma sub-bands are used.
• A novel method focusing on lower computational complexity in terms of memory requirement and computational time.
• A low-, medium-, and high-level valence and arousal emotion recognition model with the PCC feature.
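The channel-correlation feature described above can be sketched in a few lines. Below is a minimal, library-free illustration (not the authors' code) of computing Pearson's correlation coefficients between EEG channels and keeping only the upper triangular portion of the symmetric correlation matrix; the channel signals are hypothetical toy values.

```python
import math

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def upper_triangular_pcc(channels):
    """PCC of every channel pair, keeping only the upper triangle (i < j).

    Since the correlation matrix is symmetric with a unit diagonal, this
    roughly halves storage without losing any information."""
    n = len(channels)
    return {(i, j): pearson(channels[i], channels[j])
            for i in range(n) for j in range(i + 1, n)}

# Toy sub-band signals for three hypothetical channels
chs = [[0.1, 0.4, 0.3, 0.9],
       [0.2, 0.8, 0.6, 1.8],   # exactly 2x channel 0, so PCC = 1
       [0.9, 0.1, 0.4, 0.0]]
feat = upper_triangular_pcc(chs)
```

For a real sub-band image, the same pairwise values would be arranged as a 2-D array and fed to the CNN; the dictionary form here just makes the upper-triangle saving explicit.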
The significance of uterine contractions in facilitating the successful birth of fetuses is self-evident. Timely recognition of high-risk deliveries, coupled with the administration of appropriate medication, has emerged as a promising way to address this concern. However, effective early diagnostic methods remain a challenge in the field. The objective of this study was to develop a fully automated methodology for identifying both normal and premature deliveries using electrohysterogram (EHG) signals. A freely accessible database was utilized, comprising 338 signals obtained from two distinct groups of pregnant women: those who delivered at term (281 records) and those who delivered preterm (57 records). The methodology is structured into three sequential steps. First, contraction segments are extracted using an amplitude modulation technique. Next, consistent contractions are identified by correlating the extracted segments with the tocodynamometer (TOCO) signal; in this step, the consistency index is assessed. Finally, features such as energy, contraction intensity, contraction duration, peak-to-peak amplitude, log detector, and Shannon entropy are extracted from each contractile activity segment; a non-parametric Mann-Whitney U test is used to identify significant features; and a Random Forest (RF) is employed to classify and discriminate between term and preterm births. The findings show that, after the extraction of contraction segments, the average consistency index (CCI) is 0.91 for preterm conditions, compared with 0.90 for term conditions. Moreover, our experimental results show that the RF achieves an accuracy of 89%, a sensitivity of 85.87%, and a precision of 88.76%.
Our results suggest that this simple and effective method can automatically recognize uterine contractions and differentiate between term and preterm EHG signals. This may pave the way for innovative applications in the prevention of preterm labor.
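The per-segment features named above are all simple statistics. As a rough, library-free sketch (not the study's implementation), here is how energy, peak-to-peak amplitude, the log detector, and Shannon entropy might be computed for one contraction segment; the sample values and the 8-bin histogram for the entropy are invented for illustration.

```python
import math
from collections import Counter

def segment_features(seg, n_bins=8):
    """Simple per-segment EHG features of the kind listed in the study."""
    energy = sum(s * s for s in seg)          # total signal energy
    p2p = max(seg) - min(seg)                 # peak-to-peak amplitude
    # Log detector: exp of the mean log absolute amplitude
    log_det = math.exp(sum(math.log(abs(s)) for s in seg) / len(seg))
    # Shannon entropy of an equal-width amplitude histogram
    lo, hi = min(seg), max(seg)
    width = (hi - lo) / n_bins or 1.0         # guard against a flat segment
    bins = Counter(min(int((s - lo) / width), n_bins - 1) for s in seg)
    probs = [c / len(seg) for c in bins.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"energy": energy, "p2p": p2p,
            "log_detector": log_det, "entropy": entropy}

seg = [0.2, 0.5, 1.1, 0.9, 0.4, 0.3]  # hypothetical contraction segment
feats = segment_features(seg)
```

In the study's pipeline, such feature vectors from each consistent contraction would then go through the Mann-Whitney U test and on to the Random Forest.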
Functional near-infrared spectroscopy (fNIRS) is a relatively new imaging modality in the functional neuroimaging research arena. The fNIRS modality non-invasively investigates changes in blood oxygenation level in the human brain using the transillumination technique. Over the last two decades, interest in this modality has gradually grown thanks to its real-time monitoring, relatively low cost, radiation-free environment, portability, and patient-friendliness. Beyond brain-computer interface and functional neuroimaging research, this technique has important clinical applications: conditions such as Alzheimer’s disease, schizophrenia, dyslexia, Parkinson’s disease, childhood disorders, post-neurosurgery dysfunction, and impairments of attention and functional connectivity can be diagnosed with it, and it can also serve as an assistive modality in clinical practice. Accordingly, this review article presents the current scope of fNIRS in medical assistance and clinical decision making, along with future perspectives. The article also covers a short history of fNIRS, its fundamental theories, and significant outcomes reported in a number of scholarly articles. Since this is, to our knowledge, the first review that comprehensively explores the potential of fNIRS from a clinical perspective, we hope it will be helpful to researchers, physicians, practitioners, students of functional neuroimaging, and related personnel in their further studies and applications.
A brain tumor is an uncontrolled malignant cell growth in the brain and is regarded as one of the deadliest types of cancer in people of all ages. Early detection of brain tumors is needed for proper and accurate treatment. Recently, deep learning technology has attracted much attention from physicians for the diagnosis and treatment of brain tumors. This research presents a novel and effective brain tumor classification approach from MRIs that utilizes the AlexNet CNN to separate the dataset into training and test data and to extract the features. The extracted features are then fed to BayesNet, sequential minimal optimization (SMO), Naïve Bayes (NB), and Random Forest (RF) classifiers to classify brain tumors as no-tumor, glioma, meningioma, or pituitary tumors. To evaluate the model’s performance, we utilized a publicly available Kaggle dataset. This paper presents ROC, PRC, and cost curves to illustrate the classification performance of the models; performance evaluation metrics, such as accuracy, sensitivity, specificity, false positive rate, false negative rate, precision, F-measure, kappa statistics, MCC, ROC area, and PRC area, were also calculated for four testing options: the test data itself, cross-validation fold (CVF) 4, CVF 10, and a percentage split (PS) of 34% of the test data. We achieved accuracies of 88.75%, 98.15%, 86.25%, and 100% using the AlexNet CNN+BayesNet, AlexNet CNN+SMO, AlexNet CNN+NB, and AlexNet CNN+RF models, respectively, on the test data itself. These results imply that our approach is highly effective.
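The second stage of such a pipeline hands CNN-extracted feature vectors to a classical classifier. As a sketch of that stage only, the following from-scratch Gaussian Naive Bayes (written for illustration, not the paper's implementation) classifies fixed-length feature vectors of the kind a CNN's penultimate layer would produce; the 2-D "deep features" and labels are made up.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes over real-valued feature vectors."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats = {}
        for label, rows in groups.items():
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            varis = [sum((v - m) ** 2 for v in col) / n + 1e-6  # variance floor
                     for col, m in zip(zip(*rows), means)]
            self.stats[label] = (means, varis, n / len(X))
        return self

    def predict(self, x):
        def log_post(label):
            means, varis, prior = self.stats[label]
            ll = math.log(prior)
            for v, m, var in zip(x, means, varis):
                ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return ll
        return max(self.stats, key=log_post)

# Hypothetical 2-D "deep features" for two of the four tumor classes
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
y = ["no-tumor", "no-tumor", "glioma", "glioma"]
clf = GaussianNB().fit(X, y)
pred = clf.predict([0.95, 0.95])
```

Real AlexNet features are thousands of dimensions, but the classifier logic is the same.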
Electroencephalogram (EEG)-based cognitive load assessment is now an important task in psychological research. Such work is conducted by giving participants a mental task and measuring their responses through their EEG signals. It is generally assumed that cognitive workload increases during task performance. This paper investigates that assumption and shows that the conventional hypothesis is not always correct: cognitive load can vary with the performance of the participants. EEG data from 36 participants were recorded under resting and task (mental arithmetic) conditions. Signal features were extracted using the empirical mode decomposition (EMD) method and classified using a support vector machine (SVM) model. Based on the classification accuracy, hypotheses were formed about the impact of subjects' performance on cognitive load, and their validity was demonstrated through statistical considerations and graphical justification. This result will help in constructing machine learning models that predict cognitive load more appropriately in a subject-independent approach.
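The performance-dependent hypothesis above ultimately reduces to comparing per-subject classification accuracies between performance groups. A toy sketch follows, using invented accuracy values and Welch's t-statistic; the paper's actual statistical procedure may differ.

```python
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variance
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical per-subject SVM accuracies, grouped by task performance
good_performers = [0.91, 0.88, 0.93, 0.90, 0.89]
poor_performers = [0.72, 0.75, 0.70, 0.74, 0.73]
t = welch_t(good_performers, poor_performers)  # large t => groups differ
```

A large positive t here would support the claim that the rest-vs-task EEG patterns (and hence cognitive load) differ with subject performance rather than being uniform across participants.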
Diabetes is one of the most rapidly spreading diseases in the world, resulting in an array of significant complications, including cardiovascular disease, kidney failure, diabetic retinopathy, and neuropathy, among others, which contribute to increased morbidity and mortality. If diabetes is diagnosed at an early stage, its severity and underlying risk factors can be significantly reduced. However, reliable and effective diabetes prediction is challenging because labeled clinical data are scarce and often contain outliers or missing values. Therefore, we introduce a newly labeled diabetes dataset from a South Asian nation (Bangladesh). In addition, we propose an automated classification pipeline that includes a weighted ensemble of machine learning (ML) classifiers: Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), XGBoost (XGB), and LightGBM (LGB). Grid-search hyperparameter optimization is employed to tune the critical hyperparameters of these ML models. Missing-value imputation, feature selection, and K-fold cross-validation are also included in the framework design. A statistical analysis of variance (ANOVA) test reveals that diabetes prediction performance improves significantly when the proposed weighted ensemble (DT + RF + XGB + LGB) is executed with the introduced preprocessing, achieving the highest accuracy of 0.735 and an area under the ROC curve (AUC) of 0.832. In conjunction with the proposed ensemble model, our statistical imputation and RF-based feature selection techniques produced the best results for early diabetes prediction. Moreover, the presented dataset will contribute to developing and deploying robust ML models for diabetes prediction using population-level data.
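A weighted ensemble of the kind described usually amounts to weighted soft voting over the base models' class probabilities. Below is a minimal sketch with invented weights and probabilities; the actual base models (DT, RF, XGB, LGB) are replaced here by fixed probability vectors for illustration.

```python
def weighted_soft_vote(model_probs, weights):
    """Combine per-model class-probability vectors with per-model weights
    and return the index of the winning class."""
    n_classes = len(model_probs[0])
    total = [0.0] * n_classes
    for probs, w in zip(model_probs, weights):
        for k in range(n_classes):
            total[k] += w * probs[k]
    return max(range(n_classes), key=lambda k: total[k])

# Hypothetical [P(non-diabetic), P(diabetic)] outputs from four base models
probs = [[0.40, 0.60],   # DT
         [0.55, 0.45],   # RF
         [0.30, 0.70],   # XGB
         [0.35, 0.65]]   # LGB
weights = [0.2, 0.3, 0.3, 0.2]  # e.g., proportional to validation performance
label = weighted_soft_vote(probs, weights)
```

Note how the single dissenting model (RF) is outvoted once the weighted probabilities are summed; choosing the weights from validation metrics is one common design, though the paper's exact weighting scheme may differ.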
Left Ventricular Hypertrophy (LVH) is associated with cardiomyopathy and many other heart diseases. In this paper, the authors propose a combined Cornell-Sokolow (CCS) methodology that significantly improves the detection of LVH from the ECG/EKG through multi-domain analysis. Considering the two well-known voltage criteria simultaneously helps prevent LVH from being concealed in a patient's data. The multi-domain analysis, which consists of both image and signal processing of the ECG data, makes the algorithm accessible from any common, cost-effective platform. In a MATLAB environment, these processes are accurate and well optimized. With the CCS model, the outcome of this research is satisfactory in terms of both feature detection and LVH observation.
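The two voltage criteria combined in the CCS approach are standard ECG rules. A minimal sketch follows, using the commonly cited thresholds (Sokolow-Lyon: S in V1 plus R in V5 or V6 > 3.5 mV; Cornell: R in aVL plus S in V3 > 2.8 mV in men, > 2.0 mV in women); the paper's exact thresholds and combination logic may differ, and the sample amplitudes are invented.

```python
def sokolow_lyon_positive(s_v1, r_v5, r_v6):
    """Sokolow-Lyon voltage criterion: S(V1) + max(R(V5), R(V6)) > 3.5 mV."""
    return s_v1 + max(r_v5, r_v6) > 3.5

def cornell_positive(r_avl, s_v3, male):
    """Cornell voltage criterion: R(aVL) + S(V3) > 2.8 mV (men) / 2.0 mV (women)."""
    return r_avl + s_v3 > (2.8 if male else 2.0)

def combined_lvh_flag(s_v1, r_v5, r_v6, r_avl, s_v3, male):
    """Flag LVH if either criterion is met, so that a case concealed
    under one criterion can still be caught by the other."""
    return (sokolow_lyon_positive(s_v1, r_v5, r_v6)
            or cornell_positive(r_avl, s_v3, male))

# Hypothetical wave amplitudes in mV: Sokolow-Lyon positive, Cornell negative
flag = combined_lvh_flag(s_v1=1.2, r_v5=2.6, r_v6=2.1,
                         r_avl=0.8, s_v3=1.0, male=True)
```

In the paper's pipeline, the amplitudes themselves would come from the image- and signal-processing stages rather than being supplied by hand.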
This paper proposes a novel feature selection method utilizing a Rényi min-entropy-based algorithm to achieve a highly efficient brain–computer interface (BCI). Wavelet packet transformation (WPT) is widely used for feature extraction from electroencephalogram (EEG) signals. For multiple-class problems, classification accuracy depends heavily on effective feature selection from the WPT features. In conventional approaches, Shannon entropy and mutual information methods are often used to select the features. In this work, we show that the proposed Rényi min-entropy-based approach outperforms these conventional methods for multi-class EEG signal classification. The BCI competition-IV dataset, which contains 4-class motor imagery EEG signals, is used for this experiment. The data are preprocessed, separated by class, and used for feature extraction with WPT. Features are then selected using the Shannon entropy, mutual information, and Rényi min-entropy methods, and the four-class motor imagery EEG signals are classified using several machine learning algorithms. The results suggest that the proposed method outperforms the conventional approaches for multi-class BCI.
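Rényi min-entropy is the α → ∞ limit of the Rényi entropy family, H∞(X) = -log2 max_x p(x). One way to score a discretized feature with it, sketched below on toy data, is the reduction in min-entropy of the class label once the feature is observed; this is an illustrative scoring rule under that definition, not necessarily the paper's exact algorithm.

```python
import math
from collections import Counter

def min_entropy(labels):
    """Rényi min-entropy: -log2 of the probability of the most likely outcome."""
    counts = Counter(labels)
    p_max = max(counts.values()) / len(labels)
    return -math.log2(p_max)

def min_entropy_score(feature_vals, labels):
    """Score = H_inf(class) - H_inf(class | feature), where the conditional
    term is the weighted average min-entropy within each feature bin."""
    n = len(labels)
    by_val = {}
    for v, c in zip(feature_vals, labels):
        by_val.setdefault(v, []).append(c)
    cond = sum(len(grp) / n * min_entropy(grp) for grp in by_val.values())
    return min_entropy(labels) - cond

# Toy 4-class motor imagery labels and two discretized WPT features
labels = ["L", "L", "R", "R", "F", "F", "T", "T"]
good = [0, 0, 1, 1, 2, 2, 3, 3]   # perfectly separates the classes
bad = [0, 1, 0, 1, 0, 1, 0, 1]    # independent of the classes
```

Under this rule the perfectly discriminative feature scores 2 bits (the full class uncertainty removed) and the uninformative one scores 0, so ranking features by the score and keeping the top-k yields a min-entropy-based selection analogous to the Shannon-entropy and mutual-information baselines.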