A malfunctioning sensory–motor integration system provokes a wide variety of neurological disorders, many of which cannot be treated with conventional medication or with existing therapeutic technology. A brain–computer interface (BCI) is a tool that makes it possible to reintegrate the sensory–motor loop by directly accessing brain information. One promising and widely investigated application of BCIs has been motor rehabilitation, since motor deficits are among the most prevalent disabilities worldwide. This paper therefore specifies the foundations of motor rehabilitation BCIs and reviews recent research (specifically, from 2007 to date) in order to evaluate the suitability and reliability of this technology. Although BCI for post-stroke rehabilitation is still in its infancy, the tendency is towards implantable devices that combine a BCI module with a stimulation system.
•BCIs make it possible to reintegrate the sensory–motor loop by accessing brain information.
•Motor-imagery-based BCIs appear to be an effective system for early rehabilitation.
•This technology does not require residual motor activity and promotes neuroplasticity.
•BCI for rehabilitation tends towards implantable devices plus stimulation systems.
Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but current evidence substantiates both altered and intact recognition of emotional prosodies. Here, a Bayesian framework of perception is considered, suggesting that oversampling of sensory evidence impairs perception within highly variable environments, whereas reliable hierarchical structures for spectral and temporal cues would foster emotion discrimination by autistics.
Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while children listened to speech uttered by (a) human voices or (b) synthesized voices characterized by reduced volatility and variability of the acoustic environment. The assessment of perceptual mechanisms was extended to the visual domain by analyzing behavioral accuracy on a non-social task that emphasized the dynamics of precision weighting between bottom-up evidence and top-down inferences. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistics. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within–between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotion, and their interactions. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in cases of non-significance. Post hoc comparisons were corrected for multiple testing.
Autistic children presented impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypicals and autistics, pointing to different mechanisms of perception. Accordingly, behavioral measurements on the visual task were consistent with over-precision ascribed to environmental variability (sensory processing), which weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices.
This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. The neurobiological insights into the processing of emotional prosodies emphasize the potential of acoustically modified prosodies to improve emotion differentiation by autistics.
BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.
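The ERSP measure used above can be sketched as a baseline-normalized time–frequency decomposition: trial spectral power expressed in dB relative to the mean pre-stimulus baseline spectrum. This is a common textbook definition, not the study's actual pipeline, and the window and hop sizes are illustrative assumptions:

```python
import numpy as np

def ersp(trial, baseline, win=256, hop=64):
    """Event-related spectral perturbation: trial spectral power in dB
    relative to the mean baseline spectrum (illustrative parameters)."""
    def spectrogram(x):
        # Hann-windowed, overlapping frames -> power per (time, frequency) bin
        frames = [x[i:i + win] * np.hanning(win)
                  for i in range(0, len(x) - win + 1, hop)]
        return np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (n_frames, n_freqs)

    base = spectrogram(baseline).mean(axis=0)       # mean baseline power per bin
    return 10 * np.log10(spectrogram(trial) / base)  # dB change over time-frequency
```

Positive values then index event-related synchronization and negative values desynchronization in a given band.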
Emotional content is particularly salient, but situational factors such as cognitive load may disturb the attentional prioritization of affective stimuli and interfere with their processing. In this study, 31 autistic and 31 typically developed children volunteered for an assessment of their perception of affective prosodies via event-related spectral perturbations of neuronal oscillations recorded by electroencephalography, under attentional load modulations induced by Multiple Object Tracking or neutral images. Although intermediate load optimized emotion processing in typically developed children, load and emotion did not interact in children with autism. Results also outlined impaired emotional integration, reflected in theta, alpha, and beta oscillations at early and late stages, and lower attentional ability indexed by tracking capacity. Furthermore, both tracking capacity and the neuronal patterns of emotion perception during the task were predicted by daily-life autistic behaviors. These findings highlight that intermediate load may encourage emotion processing in typically developed children, whereas autism is associated with impaired affective processing and selective attention, both insensitive to load modulations. Results are discussed within a Bayesian perspective that suggests atypical updating of precision between sensations and hidden states, leading to poor contextual evaluations. For the first time, implicit emotion perception assessed by neuronal markers was integrated with environmental demands to characterize autism.
•Chronic neuropathic pain has a considerable gap in characterization.
•Neuropathic pain has mostly been analyzed in EEG through linear methodologies.
•The central nervous system undergoing pain is a dynamic and unpredictable system.
•We compare linear and nonlinear methodologies to classify pain.
•Neuropathic pain severities are differentiated more significantly with approximate entropy than with absolute band power across the neuronal frequency bands.
Neuropathic pain (NP) is a chronic pain condition that severely impacts a patient's life. Pain management has proved inefficient due to the lack of a simple clinical tool to identify and monitor NP. A low-cost, noninvasive tool that provides relevant information on NP is the electroencephalogram (EEG). However, the commonly used linear EEG features have proved limited in characterizing NP pathophysiology. This study sought to determine whether a nonlinear EEG feature such as approximate entropy (ApEn) would differentiate pain severity better than absolute band power.
A non-parametric statistical approach based on the Brief Pain Inventory (BPI), along with linear and nonlinear EEG features, is proposed in this study. For this purpose, thirty-six chronic NP patients were recruited and 22 EEG channels were recorded. Additionally, a control database of 13 participants with no NP, recorded with 19 channels, was used as a reference. For both groups, EEG was recorded for 10 min in a resting state: 5 min with eyes open (EO) and 5 min with eyes closed (EC). Absolute band power and ApEn were estimated for all channels in the five clinical frequency bands (delta, theta, alpha, beta, and gamma) in both groups. As a result, 220-dimensional and 190-dimensional feature vectors (channels × bands × eye conditions) were obtained for the experimental and control classes, respectively. For the experimental class, NP patients were divided into three groups according to their BPI evaluation: low, moderate, and high pain. Finally, feature vectors were compared between groups using Kruskal–Wallis and post hoc Dunn's tests.
ApEn revealed statistically significant differences (p ≤ 0.0001) among the groups in most frequency bands and conditions. In contrast, absolute band power showed fewer significant differences between groups, particularly with EO. Furthermore, the NP groups were clearly clustered using only ApEn in the theta, alpha, and beta bands.
The results indicate that ApEn characterizes the different severities of chronic NP more effectively than the commonly used linear features. ApEn and other nonlinear techniques (e.g., spectral entropy, Shannon entropy) may therefore be a more suitable methodology for monitoring the chronic NP experience.
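Approximate entropy (Pincus's ApEn) quantifies signal regularity by comparing how often templates of length m and m+1 repeat within a tolerance r. A minimal sketch follows; the embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults, not necessarily the study's parameters:

```python
import numpy as np

def approximate_entropy(signal, m=2, r_factor=0.2):
    """ApEn of a 1-D signal: low for regular signals, high for irregular ones.

    m is the embedding dimension; the tolerance is r_factor * std(signal).
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def phi(dim):
        # All overlapping templates of length dim
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A pure sine yields a much lower ApEn than white noise of the same length, which is the property that lets ApEn separate regular from erratic EEG dynamics.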
Binaural beats (BB) consist of two slightly different auditory frequencies (one in each ear) whose difference falls within one of the clinical electroencephalographic (EEG) bandwidths, namely delta, theta, alpha, beta, or gamma. This auditory stimulation has been widely used to modulate brain rhythms and thus induce the mental condition associated with the EEG bandwidth in use. The aim of this research was to investigate whether personalized BB (specifically, those within the theta and beta EEG bands) improve brain entrainment. Personalized BB consisted of pure tones with a 500 Hz carrier tone in the left ear together with an adjustable frequency in the right ear, defined for theta BB (since f_c for the theta EEG band was 4.60 Hz ± 0.70 SD) and for beta BB (since f_c for the beta EEG band was 18.42 Hz ± 2.82 SD). The adjustable frequencies were estimated for each participant according to their heart rate by applying the Brain–Body Coupling Theorem postulated by Klimesch. To this end, 20 healthy volunteers were stimulated with their personalized theta and beta BB for 20 min while their EEG signals were collected from 22 channels. EEG analysis was based on comparing power spectral density among three mental conditions: (1) theta BB stimulation, (2) beta BB stimulation, and (3) resting state. Results showed larger absolute power differences for both BB stimulation sessions than for resting state over bilateral temporal and parietal regions. This power change seems to be related to auditory perception and sound localization. However, no significant differences were found between theta and beta BB sessions, although different brain entrainments were expected, since theta and beta BB are assumed to induce relaxation and readiness, respectively. In addition, relative power analysis (theta BB / resting state) revealed alpha-band desynchronization in the parieto-occipital region when volunteers listened to theta BB, suggesting that participants felt uncomfortable. In conclusion, neural resynchronization was achieved with both personalized theta and beta BB, but no distinct mental conditions appear to have been induced.
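A minimal sketch of such a stimulus, assuming plain sine tones and leaving loudness calibration aside: the left ear receives the 500 Hz carrier and the right ear the carrier shifted by the personalized beat frequency (e.g., a theta beat of 4.60 Hz):

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, duration_s, fs=44100):
    """Stereo binaural-beat stimulus: carrier in the left channel,
    carrier + beat frequency in the right channel."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.column_stack([left, right])  # shape (n_samples, 2)
```

For example, `binaural_beat(500, 4.6, 20 * 60)` would generate a 20-minute theta session at the group-average beat frequency reported above.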
Socio-emotional impairments are key symptoms of Autism Spectrum Disorders. This work proposes to analyze the neuronal activity related to the discrimination of emotional prosodies in autistic children (aged 9 to 11 years) as follows. Firstly, a database of single words uttered in Mexican Spanish by males, females, and children will be created. Then, optimal acoustic features for emotion characterization will be extracted, followed by a cubic-kernel Support Vector Machine (SVM) used to validate the speech corpus. As a result, human-specific acoustic properties of emotional voice signals will be identified. Secondly, those identified acoustic properties will be modified to synthesize the recorded human emotional voices. Thirdly, both human and synthesized utterances will be used to study the electroencephalographic correlate of affective prosody processing in typically developed and autistic children. Finally, on the basis of the outcomes, synthesized-voice-enhanced environments will be created to develop an intervention based on a social robot and Social Stories for autistic children to improve discrimination of affective prosodies. This protocol has been registered at BioMed Central under number ISRCTN18117434.
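The corpus-validation step could be sketched with the cubic (degree-3 polynomial) kernel SVM named in the protocol; the feature matrix and labels below are synthetic placeholders standing in for the future acoustic features and emotion categories:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data: 120 hypothetical utterances with 20 acoustic features
# each and 6 emotion labels -- stand-ins for the planned speech corpus.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 20))
y = rng.integers(0, 6, size=120)

# Cubic-kernel SVM, cross-validated to estimate how separable
# the emotion classes are in the chosen feature space.
clf = SVC(kernel="poly", degree=3)
scores = cross_val_score(clf, X, y, cv=5)
```

High cross-validated accuracy on the real corpus would indicate that the extracted acoustic features carry the intended emotional information.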
Acoustic characterizations of different locations are necessary to obtain relevant information on their behavior, particularly for places that are not fully understood or whose purpose is still unknown because they belong to cultures that no longer exist. Acoustic measurements were conducted in the archaeological zone of Edzna to obtain information useful for better understanding the customs and practices of its past inhabitants. The information obtained from these measurements is presented in a dataset that includes measurements taken at 32 points around the entire archaeological zone, with special attention to the Main Plaza, the Great Acropolis, and the Little Acropolis. Two recording systems were used: a microphone and a binaural head. As a result, a measurement database with the following characteristics was obtained: it comprises 32 measurement points with 4 different sound-source positions, for a total of 297 files divided into separate folders. The sampling frequency was 96 kHz, and the files are in MAT format.
Fault diagnosis in high-speed machining (HSM) centers is critical in manufacturing systems, since early detection saves a substantial amount of time and money. It is known that 42% of failures in these centers occur in rotating machinery, such as spindles, in which bearings are fundamental elements for effective operation. Nowadays, several machine- and deep-learning methods exist to diagnose such faults. To improve on traditional machine-learning tools, a deep-learning network that works on raw signals, requiring no prior analysis, has been proposed. The proposed 1D Convolutional Neural Network (CNN) model showed a strong capacity to adapt to three types of configurations and three different databases, despite a training set with a smaller number of categories. The network still detected faults at early damage stages. Additionally, its low computational cost shows the Deep-Learning Neural Network's (DLNN) suitability for real-time industrial applications. The proposed structure reached a precision of 99%; real-time processing took around 8 ms per signal, and the standard deviation of repeatability was 0.25%.
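The core operation of such a network, a 1-D convolution sliding over the raw sensor signal, can be sketched as follows. This is a single valid-mode layer plus the usual nonlinearity, not the full fault-diagnosis architecture:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution: the building block a 1D CNN
    applies directly to raw vibration signals."""
    k = len(kernel)
    n_out = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(n_out)])

def relu(x):
    """Standard nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)
```

Stacking several such layers with learned kernels, followed by pooling and a classifier head, is what lets the network map raw signals to fault categories without hand-crafted features.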
The present database contains the brain activity of subjective tinnitus sufferers while identifying their tinnitus sound. The main objective of this database is to provide spontaneous electroencephalographic (EEG) activity at rest, and evoked EEG activity while tinnitus sufferers attempt to identify their tinnitus sound among 54 example tinnitus sounds. For the database, 37 volunteers were recruited: 15 without tinnitus (Control Group – CG) and 22 with tinnitus (Tinnitus Group – TG). EEG was recorded with 30 channels under two conditions: 1) a basal condition, in which the volunteer remained at rest with eyes open for two minutes; and 2) an active condition, in which the volunteer had to identify his/her sound stimulus by pressing a key. For the active condition, a tinnitus-sound library was generated according to the most typical acoustic properties of tinnitus. The library consisted of ten pure tones (250 Hz, 500 Hz, 1 kHz, 2 kHz, 3 kHz, 3.5 kHz, 4 kHz, 6 kHz, 8 kHz, 10 kHz), a White Noise (WN), a Narrow-Band noise at High frequencies (NBH, 4 kHz–10 kHz), a Narrow-Band noise at Medium frequencies (NBM, 1 kHz–4 kHz), a Narrow-Band noise at Low frequencies (NBL, 250 Hz–1 kHz), the ten pure tones combined with WN, the ten pure tones superimposed with NBH, the ten tones with NBM, and the ten pure tones combined with NBL. In total, 54 tinnitus sounds were applied to both groups. In the case of the CG, volunteers had to identify a sound at 3.5 kHz. In addition to the EEG information, a CSV file with audiometric and psychoacoustic information on the volunteers is provided. For the TG, this information covers: 1) hearing level, 2) type of tinnitus, 3) tinnitus frequency, 4) tinnitus perception, 5) the Hospital Anxiety and Depression Scale (HADS), and 6) the Tinnitus Functional Index (TFI). For the CG, it covers: 1) hearing level and 2) HADS.
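The narrow-band stimuli (NBL, NBM, NBH) can be sketched by band-limiting white noise with an FFT mask. The sampling rate, seed, and peak normalization below are illustrative choices, not necessarily those used to build the actual library:

```python
import numpy as np

def narrowband_noise(f_lo, f_hi, duration_s, fs=44100, seed=0):
    """White noise band-limited to [f_lo, f_hi] Hz by zeroing all
    FFT bins outside the band, then transforming back."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0  # keep only the band
    x = np.fft.irfft(spectrum, n)
    return x / np.max(np.abs(x))  # normalize to unit peak amplitude
```

For instance, `narrowband_noise(250, 1000, 2.0)` would approximate an NBL-style stimulus, and the same tones-plus-noise combinations described above follow by summing a pure tone with the band-limited noise.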