In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements to integrate the internal cognitive states and external subconscious behaviors of users and thereby improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary representational capacities of EEG and eye movements and identify that EEG has the advantage in classifying happy emotion, whereas eye movements outperform EEG in recognizing fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiments three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter both within and between sessions.
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common noninvasive BCI modality, electroencephalogram (EEG), is sensitive to noise and artifacts and suffers from between-subject and within-subject nonstationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, and for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce this calibration effort. This article reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications (motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks) are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the article, which may point to future research directions.
To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiments twice at an interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with the best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined by using the weights of trained DBNs are consistent with the existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and that they share commonality across sessions and individuals. We compare the performance of deep models with shallow models. The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.
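The differential entropy (DE) feature used to train the DBNs above has a simple closed form for a band-filtered EEG signal under a Gaussian assumption: h = 0.5 ln(2πeσ²). A minimal sketch of that computation, on synthetic data rather than the paper's recordings:

```python
import numpy as np

def differential_entropy(band_signal):
    # Under a Gaussian assumption, DE of a band-filtered signal
    # reduces to 0.5 * ln(2 * pi * e * variance).
    var = np.var(band_signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Synthetic stand-in for one band-filtered EEG channel (sigma = 2).
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 2.0, size=10_000)
de = differential_entropy(signal)
```

In practice the feature is computed per channel and per frequency band (delta through gamma), yielding a channels-by-bands feature matrix for each time window.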
In this paper, we investigate stable patterns of electroencephalogram (EEG) over time for emotion recognition using a machine learning approach. Up to now, various findings of activated patterns associated with different emotions have been reported. However, their stability over time has not been fully investigated yet. In this paper, we focus on identifying EEG stability in emotion recognition. We systematically evaluate the performance of various popular feature extraction, feature selection, feature smoothing, and pattern classification methods with the DEAP dataset and a newly developed dataset called SEED for this study. Discriminative Graph regularized Extreme Learning Machine with differential entropy features achieves the best average accuracies of 69.67 and 91.07 percent on the DEAP and SEED datasets, respectively. The experimental results indicate that stable patterns exhibit consistency across sessions; the lateral temporal areas activate more for positive emotions than negative emotions in the beta and gamma bands; the neural patterns of neutral emotions have higher alpha responses at parietal and occipital sites; and for negative emotions, the neural patterns have significantly higher delta responses at parietal and occipital sites and higher gamma responses at prefrontal sites. The performance of our emotion recognition models shows that the neural patterns are relatively stable within and between sessions.
Recently, emotion classification from EEG data has attracted much attention with the rapid development of dry electrode techniques, machine learning algorithms, and various real-world applications of brain-computer interfaces for normal people. Until now, however, researchers have had little understanding of the relationship between different emotional states and various EEG features. To improve the accuracy of EEG-based emotion classification and visualize the changes of emotional states over time, this paper systematically compares three kinds of existing EEG features for emotion classification, introduces an efficient feature smoothing method for removing noise unrelated to the emotion task, and proposes a simple approach to tracking the trajectory of emotion changes with manifold learning. To examine the effectiveness of these methods, we design a movie induction experiment that spontaneously leads subjects to real emotional states and collect an EEG dataset of six subjects. From experimental results on our EEG dataset, we found that (a) the power spectrum feature is superior to the other two kinds of features; (b) a linear dynamic system-based feature smoothing method can significantly improve emotion classification accuracy; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning.
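The linear dynamic system-based feature smoothing in (b) can be approximated with a one-dimensional Kalman filter applied to each feature dimension. The sketch below uses assumed process/observation noise variances (`q`, `r`) and a synthetic noisy feature track, not the paper's parameters:

```python
import numpy as np

def lds_smooth(x, q=0.01, r=1.0):
    # One-dimensional Kalman filter as a stand-in for linear-dynamic-system
    # feature smoothing; q and r are assumed noise variances.
    m, p = float(x[0]), 1.0
    out = np.empty(len(x))
    out[0] = m
    for t in range(1, len(x)):
        p_pred = p + q                 # predict step
        k = p_pred / (p_pred + r)      # Kalman gain
        m = m + k * (x[t] - m)         # update with the new observation
        p = (1 - k) * p_pred
        out[t] = m
    return out

# Synthetic slowly varying feature corrupted by observation noise.
truth = np.sin(np.linspace(0, 3, 200))
noisy = truth + np.random.default_rng(1).normal(0.0, 0.3, 200)
smoothed = lds_smooth(noisy)
```

The filter exploits the assumption that emotion-related features vary slowly relative to the measurement noise, which is exactly why smoothing helps classification.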
This is the first meta-analysis of the pooled prevalence of insomnia in the general population of China. A systematic literature search was conducted via the following databases: PubMed, PsycINFO, EMBASE, and Chinese databases (China National Knowledge Infrastructure (CNKI), WanFang Data, and SinoMed). Statistical analyses were performed using the Comprehensive Meta-Analysis program. A total of 17 studies with 115,988 participants met the inclusion criteria for the analysis. The pooled prevalence of insomnia in China was 15.0% (95% confidence interval (CI): 12.1%-18.5%). No significant difference was found in the prevalence between genders or across time periods. The pooled prevalence of insomnia in the population with a mean age of 43.7 years and older (11.6%; 95% CI: 7.5%-17.6%) was significantly lower than in those with a mean age younger than 43.7 years (20.4%; 95% CI: 14.2%-28.2%). The prevalence of insomnia was significantly affected by the type of assessment tool (Q = 14.1, P = 0.001). The general population prevalence of insomnia in China is lower than those reported in Western countries but similar to those in other Asian countries. Younger Chinese adults appear to suffer from more insomnia than older adults.
CRD 42016043620.
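Pooling prevalence estimates across studies, as in this meta-analysis, is typically done by inverse-variance weighting on the logit scale. The sketch below is a minimal fixed-effect version with hypothetical study counts (the review itself used the Comprehensive Meta-Analysis program, which also supports random-effects models):

```python
import numpy as np

# Hypothetical (cases, sample size) pairs -- not the review's 17 studies.
studies = [(120, 900), (300, 2500), (75, 400), (410, 3100)]

logits, weights = [], []
for cases, n in studies:
    p = cases / n
    logits.append(np.log(p / (1 - p)))       # logit-transformed prevalence
    weights.append(n * p * (1 - p))          # inverse of the logit's variance

pooled_logit = np.average(logits, weights=weights)
pooled_prev = 1 / (1 + np.exp(-pooled_logit))  # back-transform to a proportion
```

The logit transform keeps the pooled estimate and its confidence bounds inside (0, 1), which matters for prevalences far from 50%.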
A brain-computer interface (BCI) enables a user to communicate directly with a computer using only the central nervous system. An affective BCI (aBCI) monitors and/or regulates the emotional state of the brain, which could facilitate human cognition, communication, decision-making, and health. The last decade has witnessed rapid progress in aBCI research and applications, but there does not exist a comprehensive and up-to-date tutorial on aBCIs. This tutorial fills the gap. It first introduces the basic concepts of BCIs and then, in detail, the individual components in a closed-loop aBCI system, including signal acquisition, signal processing, feature extraction, emotion recognition, and brain stimulation. Next, it describes three representative applications of aBCIs, i.e., cognitive workload recognition, fatigue estimation, and depression diagnosis and treatment. Several challenges and opportunities in aBCI research and applications, including brain signal acquisition, emotion labeling, diversity and size of aBCI datasets, algorithm comparison, negative transfer in emotion recognition, and privacy protection and security of aBCIs, are also explained.
Cholesterol metabolism has been linked to immune functions, but the mechanisms by which cholesterol biosynthetic signaling orchestrates inflammasome activation remain unclear. Here, we have shown that NLRP3 inflammasome activation is integrated with the maturation of the cholesterol master transcription factor SREBP2. Importantly, SCAP-SREBP2 complex endoplasmic reticulum-to-Golgi translocation was required for optimal activation of the NLRP3 inflammasome both in vitro and in vivo. Enforced cholesterol biosynthetic signaling by sterol depletion or statins promoted NLRP3 inflammasome activation. However, this regulation did not predominantly depend on changes in cholesterol homeostasis controlled by the transcriptional activity of SREBP2, but relied on the escort activity of SCAP. Mechanistically, NLRP3 associated with SCAP-SREBP2 to form a ternary complex, which translocated to the Golgi apparatus adjacent to a mitochondrial cluster for optimal inflammasome assembly. Our study reveals that, in addition to controlling cholesterol biosynthesis, SCAP-SREBP2 also serves as a signaling hub integrating cholesterol metabolism with inflammation in macrophages.
• NLRP3 inflammasome activation couples SREBP2 maturation
• SCAP-SREBP2 translocation and S1P are required for optimal NLRP3 inflammasome activity
• SCAP escorts both NLRP3 and SREBP2 by forming a ternary complex
• SCAP-SREBP2 inhibition protects mice from systemic inflammation
The metabolic-inflammatory crosstalk plays a key role in host defense against pathogens and inflammation. Guo and colleagues demonstrate that the SCAP-SREBP2 complex integrates NLRP3 inflammasome activation and cholesterol biosynthetic signaling during inflammation.
Previous studies on emotion recognition from electroencephalography (EEG) mainly rely on single-channel-based feature extraction methods, which ignore the functional connectivity between brain regions. Hence, in this paper, we propose a novel emotion-relevant critical subnetwork selection algorithm and investigate three EEG functional connectivity network features: strength, clustering coefficient, and eigenvector centrality.
After constructing the brain networks from the correlations between pairs of EEG signals, we calculated critical subnetworks by averaging the brain network matrices sharing the same emotion label, thereby eliminating weak associations. Then, the three network features were fed, along with eye movement features, into a multimodal emotion recognition model using deep canonical correlation analysis. The discriminative ability of the EEG connectivity features in emotion recognition is evaluated on three public datasets: SEED, SEED-V, and DEAP.
The experimental results reveal that the strength feature outperforms the state-of-the-art features based on single-channel analysis. The classification accuracies of multimodal emotion recognition are 95.08±6.42% on the SEED dataset, 84.51±5.11% on the SEED-V dataset, and 85.34±2.90% and 86.61±3.76% for arousal and valence on the DEAP dataset, respectively, all of which represent the best performance. In addition, the brain networks constructed with 18 channels achieve performance comparable to that of the 62-channel networks and enable easier setups in real scenarios.
The proposed EEG functional connectivity network features, combined with the emotion-relevant critical subnetwork selection algorithm, effectively exploit the information shared between channels for emotion recognition.
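The pipeline described above (correlation-based brain networks, per-label averaging to suppress weak associations, then a node-strength feature) can be sketched on synthetic data; the trial counts, channel count, and edge threshold below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 6, 8, 256
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
labels = np.array([0, 0, 0, 1, 1, 1])  # two emotion classes (toy labels)

# Per-trial functional connectivity: channel-by-channel Pearson correlation.
nets = np.stack([np.corrcoef(trial) for trial in eeg])

# Critical subnetwork for one emotion: average the networks sharing that
# label, then drop weak edges below an assumed threshold.
mean_net = nets[labels == 0].mean(axis=0)
mask = np.abs(mean_net) > 0.1

# Node strength: summed absolute edge weights, excluding the self-loop.
strength = (np.abs(mean_net) * mask).sum(axis=1) - 1.0
```

Clustering coefficient and eigenvector centrality would be computed on the same masked matrix; only the per-node statistic changes.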
Objective. Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. Approach. The PERCLOS index used as the vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamically changing process, because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. Main results. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, that EOG and EEG contain complementary information for vigilance estimation, and that the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities increase, while gamma frequency activities decrease, in drowsy states in contrast to awake states. Significance. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves performance comparable to that of the temporal and posterior sites using only four shared electrodes.
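The PERCLOS index used above as the vigilance annotation is, in essence, the fraction of time the eyes are (nearly) closed within a window. A minimal sketch on synthetic eye-openness values; the 0.2 closure threshold and frame rate are assumptions for illustration:

```python
import numpy as np

# Per-frame eye openness in [0, 1], as might come from eye-tracking
# glasses; synthetic here (e.g. 10 s at 60 Hz).
rng = np.random.default_rng(2)
eye_openness = rng.uniform(0.0, 1.0, size=600)

closed = eye_openness < 0.2   # assumed closure threshold
perclos = closed.mean()       # fraction of closed frames in the window
```

In a real pipeline this is computed over a sliding window so that the annotation tracks the slow drift between awake and drowsy states.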