New biomarkers are urgently needed for many brain disorders; for example, the diagnosis of mild traumatic brain injury (mTBI) is challenging as the clinical symptoms are diverse and nonspecific. EEG and MEG studies have demonstrated several population-level indicators of mTBI that could serve as objective markers of brain injury. However, deriving clinically useful biomarkers for mTBI and other brain disorders from EEG/MEG signals is hampered by the large inter-individual variability even across healthy people. Here, we used a multivariate machine-learning approach to detect mTBI from resting-state MEG measurements. To address the heterogeneity of the condition, we employed a normative modeling approach and modeled MEG signal features of individual mTBI patients as deviations with respect to the normal variation. To this end, a normative dataset comprising 621 healthy participants was used to determine the variation in power spectra across the cortex. In addition, we constructed normative datasets based on age-matched subsets of the full normative data. To discriminate patients from healthy control subjects, we trained support-vector-machine classifiers on the quantitative deviation maps of 25 mTBI patients and 20 controls not included in the normative dataset. The best-performing classifier made use of the full normative data across the entire age and frequency ranges; it distinguished patients from controls with an accuracy of 79%. Inspection of the trained model revealed that low-frequency activity in the theta band (4–8 Hz) is a significant indicator of mTBI, consistent with earlier studies. The results demonstrate the feasibility of using normative modeling of MEG data combined with machine learning to advance the diagnosis of mTBI and to identify patients who would benefit from treatment and rehabilitation.
The current approach could be applied to a wide range of brain disorders, thus providing a basis for deriving MEG/EEG-based biomarkers.
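The normative-modeling-plus-classification pipeline described above can be sketched in a few lines. The following is a minimal illustration on synthetic data, assuming per-feature z-scores against the normative mean and standard deviation as the "deviation maps" and a linear SVM with cross-validation; all dimensions, effect sizes, and library choices are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical dimensions: 621 normative subjects, 240 spectral features
# (e.g., 40 cortical parcels x 6 frequency bands)
n_norm, n_feat = 621, 240
normative = rng.normal(0.0, 1.0, size=(n_norm, n_feat))

# The normative mean and SD per feature define the "normal variation"
mu = normative.mean(axis=0)
sd = normative.std(axis=0)

def deviation_map(x):
    """Z-score spectral features against the normative distribution."""
    return (x - mu) / sd

# Simulated cohort: 25 patients with excess low-frequency (theta) power in
# the first 40 features, 20 controls drawn from the normative distribution
patients = rng.normal(0.0, 1.0, size=(25, n_feat))
patients[:, :40] += 1.5
controls = rng.normal(0.0, 1.0, size=(20, n_feat))

X = deviation_map(np.vstack([patients, controls]))
y = np.array([1] * 25 + [0] * 20)

# Linear SVM on the deviation maps, 5-fold cross-validated accuracy
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
```

Because the classifier sees deviations rather than raw power values, subjects are implicitly compared against the normative population, which is what makes the approach robust to inter-individual variability.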
Movie viewing allows human perception and cognition to be studied in complex, real-life-like situations in a brain-imaging laboratory. Previous studies with functional magnetic resonance imaging (fMRI) and with magneto- and electroencephalography (MEG and EEG) have demonstrated consistent temporal dynamics of brain activity across movie viewers. However, little is known about the similarities and differences of fMRI and MEG or EEG dynamics during such naturalistic situations.
We thus compared MEG and fMRI responses to the same 15-min black-and-white movie in the same eight subjects who watched the movie twice during both MEG and fMRI recordings. We analyzed intra- and intersubject voxel-wise correlations within each imaging modality as well as the correlation of the MEG envelopes and fMRI signals. The fMRI signals showed voxel-wise within- and between-subjects correlations up to r = 0.66 and r = 0.37, respectively, whereas these correlations were clearly weaker for the envelopes of band-pass filtered (7 frequency bands below 100 Hz) MEG signals (within-subjects correlation r < 0.14 and between-subjects r < 0.05). Direct MEG–fMRI voxel-wise correlations were unreliable. Notably, applying a spatial-filtering approach to the MEG data uncovered consistent canonical variates that showed considerably stronger (up to r = 0.25) between-subjects correlations than the univariate voxel-wise analysis. Furthermore, the envelopes of the time courses of these variates up to about 10 Hz showed association with fMRI signals in a general linear model. Similarities between envelopes of MEG canonical variates and fMRI voxel time-courses were seen mostly in occipital, but also in temporal and frontal brain regions, whereas intra- and intersubject correlations for MEG and fMRI separately were strongest only in the occipital areas.
In contrast to the conventional univariate analysis, the spatial-filtering approach was able to uncover associations between the MEG envelopes and fMRI time courses, shedding light on the similarities of hemodynamic and electromagnetic brain activities during movie viewing.
During joint actions, people typically adjust their own actions according to the ongoing actions of the partner, which implies that the interaction modulates the behavior of both participants. However, the neural substrates of such mutual adaptation are still poorly understood. Here, we set out to identify the kinematics-related brain activity of leaders and followers performing hand actions.
Sixteen participants, forming 8 pairs, performed continuous, repetitive right-hand opening and closing actions with ~3-s cycles in a leader–follower task. Subjects played each role for 5 min. Magnetoencephalographic (MEG) brain signals were recorded simultaneously from both partners with a dual-MEG setup, and hand kinematics were monitored with accelerometers. The modulation index, a cross-frequency coupling measure, was computed between the hand acceleration and the MEG signals in the alpha (7–13 Hz) and beta (13–25 Hz) bands.
Regardless of the participants' role, the strongest alpha and beta modulations occurred bilaterally in the sensorimotor cortices. In the occipital region, beta modulation was stronger in followers than leaders; these oscillations originated, according to beamformer source reconstructions, in early visual cortices. Despite differences in the modulation indices, alpha and beta power did not differ between the conditions.
Our results indicate that the beta modulation in the early visual cortices depends on the subject's role as a follower or leader in a joint hand-action task. This finding could reflect the different strategies employed by leaders and followers in integrating kinematics-related visual information to control their own actions.
• Pairs of subjects performed hand movements as a leader and follower in a dual-MEG setup.
• Alpha and beta powers did not differ between followers and leaders.
• Alpha and beta modulation indices were strongest at bilateral sensorimotor cortices.
• Beta modulation was stronger in followers than leaders in the early visual cortex.
• The role might influence the integration of kinematics-related visual information to control one's own movements.
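The modulation index used in the study above quantifies how the amplitude of a fast oscillation depends on the phase of a slow signal (here, the ~3-s movement cycle). A minimal sketch of a Tort-style modulation index on synthetic data follows; the bin count, signal parameters, and coupling strength are illustrative assumptions, not the study's settings.

```python
import numpy as np

def modulation_index(phase, amp, n_bins=18):
    """Tort-style modulation index: KL divergence of the phase-binned
    amplitude distribution from the uniform distribution, normalized by
    log(n_bins) so the result lies in [0, 1]."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.digitize(phase, edges) - 1
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return float(np.sum(p * np.log(p * n_bins)) / np.log(n_bins))

fs = 1000
t = np.arange(0, 60, 1 / fs)                          # 60 s at 1 kHz
cycle_phase = 2 * np.pi * ((t / 3.0) % 1.0) - np.pi   # ~3-s movement cycle

rng = np.random.default_rng(2)
# A band-limited amplitude envelope whose level depends on movement phase ...
coupled = 1.0 + 0.5 * np.cos(cycle_phase) + 0.1 * rng.normal(size=t.size)
# ... and one that does not
uncoupled = 1.0 + 0.1 * rng.normal(size=t.size)

mi_coupled = modulation_index(cycle_phase, np.abs(coupled))
mi_uncoupled = modulation_index(cycle_phase, np.abs(uncoupled))
```

A larger index for the coupled envelope indicates that band power is systematically tied to the movement cycle, which is the effect compared between leaders and followers above.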
Beamformers are applied for estimating spatiotemporal characteristics of neuronal sources underlying measured MEG/EEG signals. Several MEG analysis toolboxes include an implementation of a linearly constrained minimum-variance (LCMV) beamformer. However, differences in implementations and in their results complicate the selection and application of beamformers and may hinder their wider adoption in research and clinical use. Additionally, combinations of different MEG sensor types (such as magnetometers and planar gradiometers) and application of preprocessing methods for interference suppression, such as signal space separation (SSS), can affect the results in different ways for different implementations. So far, a systematic evaluation of the different implementations has not been performed. Here, we compared the localization performance of the LCMV beamformer pipelines in four widely used open-source toolboxes (MNE-Python, FieldTrip, DAiSS (SPM12), and Brainstorm) using datasets both with and without SSS interference suppression.
We analyzed MEG data that were i) simulated, ii) recorded from a static and moving phantom, and iii) recorded from a healthy volunteer receiving auditory, visual, and somatosensory stimulation. We also investigated the effects of SSS and the combination of the magnetometer and gradiometer signals. We quantified how localization error and point-spread volume vary with the signal-to-noise ratio (SNR) in all four toolboxes.
When applied carefully to MEG data with a typical SNR (3–15 dB), all four toolboxes localized the sources reliably; however, they differed in their sensitivity to preprocessing parameters. As expected, localizations were highly unreliable at very low SNR, but we also found high localization error at very high SNRs for the first three toolboxes, while Brainstorm showed greater robustness, albeit with lower spatial resolution. In addition, the SNR improvement offered by SSS led to more accurate localization.
• Different beamformer implementations are reported to sometimes yield differing source estimates for the same MEG data.
• We compared beamformers in four major open-source MEG analysis toolboxes.
• All toolboxes provide consistent and accurate results with 3–15-dB input SNR.
• However, localization errors are high at very high input SNR for the tested scalar beamformers.
• We discuss the critical differences between the implementations.
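The core of an LCMV beamformer pipeline can be illustrated compactly. Below is a minimal NumPy sketch of scanning a source grid with a unit-gain LCMV filter and a noise-normalized output power (a neural-activity-index-style measure); the random lead fields, noise levels, and regularization are illustrative assumptions rather than any toolbox's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sens, n_src, n_times = 32, 50, 5000

# Hypothetical forward model: one lead-field column per candidate location
L = rng.normal(size=(n_sens, n_src))
true_src = 17

# Sensor data: a 10-Hz source at one location plus white sensor noise
s = np.sin(2 * np.pi * 10 * np.arange(n_times) / 1000.0)
data = np.outer(L[:, true_src], s) + 0.5 * rng.normal(size=(n_sens, n_times))

# Data covariance with a small diagonal regularization term
C = np.cov(data)
Cinv = np.linalg.inv(C + 1e-6 * (np.trace(C) / n_sens) * np.eye(n_sens))

# Unit-gain LCMV output power at location i is 1 / (l^T C^-1 l).  Dividing
# by the projected white-noise power instead gives a neural activity index
# proportional to (l^T l) / (l^T C^-1 l), which counteracts the beamformer's
# bias toward locations with weak lead fields.
nai = np.array([(L[:, i] @ L[:, i]) / (L[:, i] @ Cinv @ L[:, i])
                for i in range(n_src)])

estimated = int(np.argmax(nai))
```

Implementation choices such as the regularization amount, the noise normalization, and how magnetometer and gradiometer signals are combined are exactly the points where the four toolboxes compared above can diverge.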
To successfully interact with others, people automatically mimic their actions and feelings. Yet, neurobehavioral studies of interaction are few because of lacking conceptual and experimental ...frameworks. A recent study introduced an elegantly simple motor task to unravel implicit interpersonal behavioral synchrony and brain function during face-to-face interaction.
Selective auditory attention enables the filtering of relevant acoustic information from irrelevant input. Specific auditory responses, measurable by magneto- and electroencephalography (MEG/EEG), are known to be modulated by attention to the evoking stimuli. However, such attention effects have typically been studied in unnatural conditions (e.g. during dichotic listening of pure tones) and have been demonstrated mostly in averaged auditory evoked responses. To test how reliably we can detect the attention target from unaveraged brain responses, we recorded MEG data from 15 healthy subjects who were presented with two human speakers uttering continuously the words "Yes" and "No" in an interleaved manner. The subjects were asked to attend to one speaker. To investigate which temporal and spatial aspects of the responses carry the most information about the target of auditory attention, we performed spatially and temporally resolved classification of the unaveraged MEG responses using a support vector machine. Sensor-level decoding of the responses to attended vs. unattended words resulted in a mean accuracy of [formula: see text] (N = 14) for both stimulus words. The discriminating information was mostly available 200–400 ms after the stimulus onset. Spatially resolved source-level decoding indicated that the most informative sources were in the auditory cortices, in both the left and right hemisphere. Our result corroborates attention modulation of auditory evoked responses and shows that such modulations are detectable in unaveraged MEG responses at high accuracy, which could be exploited, e.g., in an intuitive brain–computer interface.
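The single-trial decoding approach described above can be sketched on synthetic data: trials where attention adds gain on a fixed spatial pattern in a mid-latency window are classified against trials without it. The trial counts, sensor/time dimensions, and effect size are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_trials, n_ch, n_times = 100, 30, 60   # hypothetical: 30 sensors, 60 samples

# Simulated single-trial responses: attention adds gain on a fixed spatial
# pattern in a mid-latency window (cf. the 200-400 ms effect in the study)
X = 0.8 * rng.normal(size=(2 * n_trials, n_ch, n_times))
pattern = rng.normal(size=n_ch)
X[:n_trials, :, 20:40] += 0.3 * pattern[:, None]    # "attended" trials
y = np.array([1] * n_trials + [0] * n_trials)       # attended vs. unattended

# Linear SVM on flattened sensor-by-time features, 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X.reshape(len(X), -1), y, cv=5).mean()
```

Restricting the features to individual sensors or time windows, as in the study's spatially and temporally resolved analyses, would localize where and when the attention effect carries information.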
Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. In particular, empathy affects both the speed and intensity of rating dogs' emotional facial expressions.
Current knowledge about the precise timing of visual input to the cortex relies largely on spike timings in monkeys and evoked-response latencies in humans. However, quantifying the activation onset does not unambiguously describe the timing of stimulus-feature-specific information processing. Here, we investigated the information content of the early human visual cortical activity by decoding low-level visual features from single-trial magnetoencephalographic (MEG) responses. MEG was measured from nine healthy subjects as they viewed annular sinusoidal gratings (spanning the visual field from 2 to 10° for a duration of 1 s), characterized by spatial frequency (0.33 cycles/degree or 1.33 cycles/degree) and orientation (45° or 135°); gratings were either static or rotated clockwise or anticlockwise from 0 to 180°. Time-resolved classifiers using a 20 ms moving window exceeded chance level at 51 ms (the later edge of the window) for spatial frequency, 65 ms for orientation, and 98 ms for rotation direction. Decoding accuracies of spatial frequency and orientation peaked at 70 and 90 ms, respectively, coinciding with the peaks of the onset evoked responses. Within-subject time-insensitive pattern classifiers decoded spatial frequency and orientation simultaneously (mean accuracy 64%, chance 25%) and rotation direction (mean 82%, chance 50%). Classifiers trained on data from other subjects decoded the spatial frequency (73%), but not the orientation, nor the rotation direction. Our results indicate that unaveraged brain responses contain decodable information about low-level visual features already at the time of the earliest cortical evoked responses, and that representations of spatial frequency are highly robust across individuals.
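The moving-window (time-resolved) classification scheme described above, where decoding accuracy is tracked over time to find when feature information first exceeds chance, can be sketched on synthetic data as follows. The epoch length, window size, onset latency, and effect size are illustrative assumptions, not the study's values.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_trials, n_ch, n_times = 80, 20, 100   # hypothetical epoch: 100 time samples
onset = 30                              # sample at which information appears

# Two stimulus classes; class 1 carries a feature-specific spatial pattern
# from `onset` onwards
X = rng.normal(size=(2 * n_trials, n_ch, n_times))
pattern = rng.normal(size=n_ch)
X[:n_trials, :, onset:] += 0.5 * pattern[:, None]
y = np.array([1] * n_trials + [0] * n_trials)

win = 8   # moving-window length in samples (cf. the 20-ms window in the study)
acc = np.array([
    cross_val_score(SVC(kernel="linear"),
                    X[:, :, t:t + win].reshape(len(X), -1), y, cv=5).mean()
    for t in range(n_times - win)
])

# Earliest window whose cross-validated accuracy clearly exceeds chance (0.5)
first = int(np.argmax(acc > 0.7))
```

Because accuracy is attributed to the window's later edge only once the window overlaps the informative interval, the detected onset is an upper bound on the true latency, which is why the study reports the later edge of the window.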