With the development of deep learning (DL), EEG-based emotion recognition has attracted increasing attention. Diverse DL algorithms have emerged to intelligently decode human emotion from EEG signals. However, the lack of a toolbox encapsulating these techniques further hampers the design, development, testing, implementation, and management of intelligent systems. To tackle this bottleneck, we propose a Python toolbox, TorchEEGEMO, which divides the workflow into five modules: datasets, transforms, model_selection, models, and trainers. Each module provides plug-and-play functions to construct and manage a stage of the workflow. Recognizing the frequent access to time windows of interest, we introduce a window-centric parallel input/output system that bolsters the efficiency of DL systems. Finally, we conduct extensive experiments to provide benchmark results for the supported modules. The results demonstrate the versatility and applicability of TorchEEGEMO across various scenarios.
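The window-centric access pattern the abstract describes can be illustrated with a minimal NumPy sketch. The function name and parameters below are illustrative only, not TorchEEGEMO's actual API:

```python
import numpy as np

def sliding_windows(eeg, window_size, stride):
    """Split a (channels, samples) EEG recording into overlapping
    time windows of shape (n_windows, channels, window_size)."""
    n_channels, n_samples = eeg.shape
    n_windows = (n_samples - window_size) // stride + 1
    starts = np.arange(n_windows) * stride
    return np.stack([eeg[:, s:s + window_size] for s in starts])

# A 32-channel recording sampled at 128 Hz for 60 s,
# cut into 1 s windows with 50 % overlap.
eeg = np.random.randn(32, 128 * 60)
windows = sliding_windows(eeg, window_size=128, stride=64)
print(windows.shape)  # (119, 32, 128)
```

Precomputing window boundaries like this is what makes a window-centric I/O system amenable to parallel reads: each window is an independent unit of work.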
•The first deep learning toolbox for EEG-based emotion recognition.
•A workflow that divides the recognition system into five plug-and-play modules.
•Built-in functions cover datasets, transforms, models, algorithms, and more.
•A novel window-centric EEG I/O system enhances system efficiency.
•Experiments demonstrate benchmark performance across various scenarios.
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
This paper proposes a high-accuracy EEG-based schizophrenia (SZ) detection approach. Unlike comparable studies employing conventional machine learning algorithms, our method autonomously extracts the features necessary for network training from EEG recordings. The proposed model is a ten-layer CNN that contains four convolution layers, a max pooling layer, a Global Average Pooling layer, two dropout layers for overfitting prevention, and two fully connected layers. The efficiency of the suggested method was assessed using ten-fold cross-validation and the EEG records of 14 healthy subjects and 14 SZ patients. The obtained mean accuracy score was 99.18 %. To confirm this high mean accuracy, we tested the model on unseen data and obtained a near-perfect accuracy score (almost 100 %). In addition, our results outperform those of numerous comparable works.
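The ten-fold cross-validation protocol used to evaluate such a model can be sketched as follows. A logistic-regression stand-in replaces the ten-layer CNN, and the synthetic features are placeholders for EEG-derived inputs; the protocol, not the classifier, is the point:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder features: 280 trials, 64 features, binary labels (healthy vs. SZ).
X = rng.normal(size=(280, 64))
y = rng.integers(0, 2, size=280)
X[y == 1] += 0.8  # separate the classes so the sketch learns something

scores = []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean accuracy over 10 folds: {np.mean(scores):.3f}")
```

Stratified splitting keeps the healthy/SZ ratio roughly constant across folds, which matters for small clinical cohorts like the 14-versus-14 one described here.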
Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data are generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on feature extraction, feature selection, and classification. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
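A minimal version of the feature-extraction and classification stages such pipelines typically involve is band-power features plus a linear classifier. All names, band edges, and the injected class effect below are illustrative, not taken from the review:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def log_bandpower(trials, fs, band=(8.0, 30.0)):
    """Log-variance of mu/beta band-passed EEG: one feature per channel.
    trials: (n_trials, n_channels, n_samples)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.var(filtered, axis=-1))

rng = np.random.default_rng(1)
fs = 250
X_raw = rng.normal(size=(100, 22, fs * 3))  # 100 synthetic 3 s MI trials
y = rng.integers(0, 2, size=100)            # left- vs. right-hand labels
X_raw[y == 1, :11] *= 1.5                   # inject a class-dependent power difference

features = log_bandpower(X_raw, fs)
clf = LinearDiscriminantAnalysis().fit(features[:80], y[:80])
acc = clf.score(features[80:], y[80:])
print("held-out accuracy:", acc)
```

Real MI pipelines usually add a spatial-filtering step (e.g., CSP) between filtering and variance, but the filter-feature-classify skeleton is the same.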
The neural mechanisms underpinning the association between age-related hearing loss (ARHL) and dementia remain unclear. A limitation has been the lack of functional neuroimaging studies in ARHL cohorts to help clarify this relationship. In the present study, we investigated the neural correlates of feature binding in visual working memory in adults with ARHL (controls = 14, mild HL = 21, and moderate or greater HL = 23). Participants completed a visual change detection task assessing feature binding while their neural activity was synchronously recorded via high-density electroencephalography. There was no difference in accuracy scores between the ARHL groups and controls. However, electrophysiological activity was increased in those with ARHL, particularly in components indexing the earlier stages of visual cognitive processing. This activity was more pronounced with more severe ARHL and was associated with maintained feature binding. Source-space (sLORETA) analyses indicated greater activity in networks modulated by frontoparietal and temporal regions. Our results suggest that there may be increased involvement of neurocognitive control networks to maintain lower-order neurocognitive processing disrupted by ARHL.
•An efficient TCNet-Fusion model for MI-EEG classification is proposed.
•1D convolutions are applied first in the temporal domain, then channel-wise.
•An image-like 2D representation is fed to the proposed model.
•The model achieved 83.73 % accuracy on BCI Competition IV-2a.
•The model achieved 94.41 % accuracy on the High Gamma Dataset.
Motor imagery electroencephalography (MI-EEG) signals are generated when a person imagines a task without actually performing it. In recent studies, MI-EEG has been used in the rehabilitation of paralyzed patients; decoding MI-EEG signals accurately is therefore an important task, but a difficult one due to the low signal-to-noise ratio and the variation of brain waves between subjects. Deep learning techniques such as the convolutional neural network (CNN) have proven effective at extracting meaningful features to improve classification accuracy. In this paper, we propose TCNet-Fusion, a fixed-hyperparameter CNN model that combines multiple techniques, such as temporal convolutional networks (TCNs), separable convolution, depth-wise convolution, and the fusion of layers. The model outperforms other fixed-hyperparameter CNN models and performs comparably to variable-hyperparameter networks, which adapt their hyperparameters to each subject and thus achieve higher accuracy than fixed networks, while using less memory than they do. The EEG signal undergoes two successive 1D convolutions, first along the time domain, then channel-wise. This yields an image-like representation, which is fed to the main TCN. During experimentation, the model achieved a classification accuracy of 83.73 % on the four-class MI task of the BCI Competition IV-2a dataset and 94.41 % on the High Gamma Dataset.
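The two successive 1D convolutions described above, temporal then channel-wise, can be sketched in NumPy. Kernel sizes and shapes are illustrative, not the paper's exact hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 22, 1000
eeg = rng.normal(size=(n_channels, n_samples))

# Stage 1: temporal convolution -- the same 1D kernel slides along the
# time axis of every channel independently ("valid" mode, no padding).
k_time = 25
temporal_kernel = rng.normal(size=k_time)
temporal_out = np.stack([np.convolve(ch, temporal_kernel, mode="valid")
                         for ch in eeg])       # (22, 976)

# Stage 2: channel-wise (spatial) convolution -- each output map is a
# linear combination across all electrodes at each time step, i.e. a
# kernel that spans the channel axis but a single time sample.
n_maps = 16
spatial_weights = rng.normal(size=(n_maps, n_channels))
image_like = spatial_weights @ temporal_out    # (16, 976)

print(image_like.shape)  # image-like 2D representation fed to the TCN
```

The resulting (maps x time) array is the "image-like representation" the abstract mentions; in the actual model each stage would have learned weights and many temporal kernels rather than one.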
This paper considers the auditory attention detection (AAD) paradigm, where the goal is to determine which of two simultaneous speakers a person is attending to. The paradigm relies on recordings of the listener's brain activity, e.g., from electroencephalography (EEG). To perform AAD, decoded EEG signals are typically correlated with the temporal envelopes of the speech signals of the separate speakers. In this paper, we study how the inclusion of various degrees of auditory modelling in this speech envelope extraction process affects the AAD performance, where the best performance is found for an auditory-inspired linear filter bank followed by power law compression. These two modelling stages are computationally cheap, which is important for implementation in wearable devices, such as future neuro-steered auditory prostheses. We also introduce a more natural way to combine recordings (over trials and subjects) to train the decoder, which reduces the dependence of the algorithm on regularization parameters. Finally, we investigate the simultaneous design of the EEG decoder and the audio subband envelope recombination weights vector using either a norm-constrained least squares or a canonical correlation analysis, but conclude that this increases computational complexity without improving AAD performance.
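The best-performing envelope extraction described above, a linear filter bank followed by power-law compression, can be sketched as follows. The band edges, filter order, and 0.6 exponent are illustrative choices, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def subband_envelope(audio, fs, bands, exponent=0.6):
    """Filter speech into subbands, take each subband's magnitude
    envelope, apply power-law compression, and sum the subbands."""
    env = np.zeros_like(audio)
    for low, high in bands:
        b, a = butter(2, (low, high), btype="bandpass", fs=fs)
        subband = filtfilt(b, a, audio)
        env += np.abs(hilbert(subband)) ** exponent
    return env

fs = 8000
t = np.arange(fs) / fs
# Two-tone stand-in for a speech signal.
speech = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
bands = [(100, 500), (500, 1500), (1500, 3500)]
envelope = subband_envelope(speech, fs, bands)
print(envelope.shape)  # one envelope sample per audio sample
```

In an AAD system this envelope would then be downsampled to the EEG rate and correlated with the decoded EEG of each candidate speaker.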
For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20-30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulate the discriminative feature representation as a combination of the spectral-spatial input, which embeds the diversity of the EEG signals, and a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral-spatial inputs, we first identify the discriminative frequency bands using an information-theoretic observation model that measures the power of the features in the two classes. From these discriminative frequency bands, spectral-spatial inputs that capture the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the feature representation process, spectral-spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods: common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO).
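The covariance-matrix input described above can be sketched as follows: band-pass filter a trial in a discriminative frequency band, then compute the channel-by-channel covariance that is fed to the CNN. The band, shapes, and normalization are illustrative, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def spectral_spatial_input(trial, fs, band):
    """Band-pass filter a (channels, samples) trial and return a
    trace-normalized channel covariance matrix as CNN input."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trial, axis=-1)
    cov = filtered @ filtered.T
    return cov / np.trace(cov)  # normalize so scale is comparable across trials

rng = np.random.default_rng(0)
trial = rng.normal(size=(62, 4 * 1000))  # 62 channels, 4 s at 1 kHz
cov = spectral_spatial_input(trial, fs=1000, band=(8.0, 12.0))
print(cov.shape)  # (62, 62): symmetric, unit-trace
```

Repeating this for each discriminative band yields one covariance matrix per band; the framework then trains a CNN per input and concatenates the learned features.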
The application of multisource information fusion in real-world scenarios is an emerging practice because it effectively uses consistent and complementary data to optimize decision-making. Dempster-Shafer (D-S) evidence theory is prevalent because it competently handles uncertainty problems by assigning basic probability assignments (BPAs) to multielement subsets. However, a counterintuitive result may be obtained when the evidence is highly conflicting. To overcome this flaw, this paper defines a new divergence measurement to quantify the differences between BPAs; we name this new metric the belief Rényi divergence. The belief Rényi divergence takes the number of possible hypotheses into consideration, which makes it a more rational and effective difference measurement in the realm of evidence theory. Additionally, some important properties of the belief Rényi divergence are extensively explored and proven, in which the belief Rényi divergence also connects to Kullback-Leibler divergence, Hellinger distance and χ² divergence. Moreover, a novel multisource information fusion method is devised based on the proposed belief Rényi divergence and belief entropy. Our proposed belief Rényi divergence can efficiently model the differences between evidence, and the belief entropy is used to calculate the information volume of evidence. Thus, the proposed method can sufficiently exploit the relationships among evidence and the information volume of the evidence itself. Two case studies are illustrated to verify the effectiveness and practicality of the proposed method. Also, an experiment on an iris dataset classification is presented to verify the performance of the proposed method. In addition, an EEG data analysis application demonstrates that the proposed method can be effectively used in real-world applications.
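For context, the classic Dempster's rule whose counterintuitive behavior under high conflict motivates divergence-based fusion methods like the one above can be sketched as:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with Dempster's
    rule. BPAs map frozensets of hypotheses to masses summing to 1."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # intersecting focal elements reinforce each other
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:      # disjoint focal elements contribute to the conflict mass
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: BPAs cannot be combined")
    # Renormalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Zadeh's classic highly conflicting example: hypothesis C ends up with
# full belief even though both sources consider it nearly impossible.
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, C: 0.01}
m2 = {B: 0.99, C: 0.01}
print(dempster_combine(m1, m2))  # {frozenset({'C'}): 1.0}
```

This is exactly the failure mode the paper targets: measuring how different two BPAs are before combining them lets a fusion method discount such conflicting evidence.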