Current research in affective neuroscience suggests that the emotional content of visual stimuli activates brain-body responses that could be critical to general health and physical disease. The aim of this study was to develop an integrated neurophysiological approach linking central and peripheral markers of nervous activity during the presentation of natural scenes in order to determine the temporal stages of brain processing related to the bodily impact of emotions. More specifically, whole-head magnetoencephalogram (MEG) data and the skin conductance response (SCR), a reliable autonomic marker of central activation, were recorded in healthy volunteers during the presentation of emotional (unpleasant and pleasant) and neutral pictures selected from the International Affective Picture System (IAPS). Analyses of event-related magnetic fields (ERFs) revealed greater activity at 180 ms in an occipitotemporal component for emotional pictures than for their neutral counterparts. More importantly, these early effects of emotional arousal on cerebral activity were significantly correlated with later increases in SCR magnitude. For the first time, a neuromagnetic cortical component linked to a well-documented marker of the bodily expression of emotional arousal, namely the SCR, was identified and localized. This finding sheds light on the time course of the brain-body interaction with emotional arousal and provides new insights into the neural bases of complex and reciprocal mind-body links.
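The brain-body link reported above rests on correlating an early neuromagnetic amplitude with a later autonomic response. A minimal sketch of that kind of analysis in Python, on simulated per-trial arrays (the sensor selection, latency window, and data are placeholders, not the authors' actual pipeline):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical inputs: MEG epochs (n_trials, n_sensors, n_times) sampled at
# 1000 Hz with t = 0 at picture onset, and per-trial SCR magnitudes.
rng = np.random.default_rng(0)
n_trials, n_sensors, sfreq = 120, 30, 1000
erf = rng.normal(size=(n_trials, n_sensors, 600))     # 0-600 ms epochs
scr = rng.gamma(shape=2.0, scale=0.1, size=n_trials)  # SCR magnitudes (uS)

# Mean ERF amplitude in a window around the 180 ms effect, averaged over an
# assumed occipitotemporal sensor selection (placeholder indices).
occipitotemporal = [0, 1, 2, 3, 4]
win = slice(int(0.160 * sfreq), int(0.200 * sfreq))   # 160-200 ms
amp = erf[:, occipitotemporal, win].mean(axis=(1, 2))

# Correlate the early cortical response with the later autonomic response.
r, p = pearsonr(amp, scr)
print(f"ERF (180 ms) vs. SCR magnitude: r = {r:.2f}, p = {p:.3f}")
```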
Emotional facial expressions (EFE) are efficiently processed when both attention and gaze are focused on them. However, what kind of processing persists when EFE are neither the target of attention nor of gaze remains largely unknown. Consequently, in this experiment we investigated whether the implicit processing of faces displayed in the far periphery could still be modulated by their emotional expression. Happy, fearful and neutral faces appeared randomly for 300 ms at four peripheral locations of a panoramic screen (15° and 30° in the right and left visual fields). Reaction times and electrophysiological responses were recorded from 32 participants who had to categorize these faces according to their gender. A decrease in behavioral performance was specifically found for happy and fearful faces, probably because emotional content was automatically processed and interfered with information necessary for the task. A spatio-temporal principal component analysis of the electrophysiological data confirmed an enhancement of early activity in occipito-temporal areas for emotional faces in comparison with neutral ones. Overall, these data show an implicit processing of EFE despite the strong decrease in visual performance with eccentricity. Therefore, the present research suggests that EFE could be automatically detected in peripheral vision, confirming the ability of humans to process emotional saliency in very impoverished viewing conditions.
► A decrease in behavioral performance was found for fearful and happy faces.
► Early evoked components are enhanced by the emotional expression of faces.
► The emotional expression of faces is implicitly processed in peripheral vision.
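A spatio-temporal principal component analysis like the one used above decomposes the electrodes × time data matrix into a small set of temporal factors. A minimal temporal-PCA sketch via SVD on synthetic data (the dimensions are illustrative, and the varimax rotation typically applied in published pipelines is omitted):

```python
import numpy as np

# Hypothetical grand-average ERPs: rows are condition x electrode "cases",
# columns are time points (3 conditions x 32 electrodes, 300 samples).
rng = np.random.default_rng(1)
n_cases, n_times = 3 * 32, 300
data = rng.normal(size=(n_cases, n_times))

# Temporal PCA: center each time point, then SVD; the right singular vectors
# are temporal factors (waveforms) and the projections are their loadings.
centered = data - data.mean(axis=0, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

n_keep = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1  # ~90% variance
factors = vt[:n_keep]          # temporal factors, shape (n_keep, n_times)
scores = centered @ factors.T  # factor scores per condition/electrode case
print(f"{n_keep} temporal factors explain "
      f"{100 * explained[:n_keep].sum():.1f}% of the variance")
```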
When we hear an emotional voice, does this alter how the brain perceives and evaluates a subsequent face? Here, we tested this question by comparing event-related potentials evoked by angry, sad, and happy faces following vocal expressions which varied in form (speech-embedded emotions, non-linguistic vocalizations) and emotional relationship (congruent, incongruent). Participants judged whether face targets were true exemplars of emotion (facial affect decision). Prototypicality decisions were more accurate and faster for congruent vs. incongruent faces and for targets that displayed happiness. Principal component analysis identified vocal context effects on faces in three distinct temporal factors: a posterior P200 (150–250 ms), associated with evaluating face typicality; a slow frontal negativity (200–750 ms) evoked by angry faces, reflecting enhanced attention to threatening targets; and the Late Positive Potential (LPP, 450–1000 ms), reflecting sustained contextual evaluation of intrinsic face meaning (with independent LPP responses in posterior and prefrontal cortex). Incongruent faces and faces primed by speech (compared to vocalizations) tended to increase demands on face perception at stages of structure-building (P200) and meaning integration (posterior LPP). The frontal LPP spatially overlapped with the earlier frontal negativity response; these components were functionally linked to expectancy-based processes directed towards the incoming face, governed by the form of a preceding vocal expression (especially for anger). Our results showcase differences in how vocalizations and speech-embedded emotion expressions modulate cortical operations for predicting (prefrontal) versus integrating (posterior) face meaning in light of contextual details.
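The component effects above (P200 at 150–250 ms, LPP at 450–1000 ms) are conventionally quantified as mean amplitudes within fixed latency windows, compared across conditions. A minimal sketch with simulated epochs (the window bounds follow the abstract; the electrode picks and data are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
sfreq, tmin = 250, -0.2  # sampling rate (Hz); epochs start 200 ms pre-onset
epochs = {               # hypothetical (n_trials, n_electrodes, n_times) data
    "congruent": rng.normal(size=(80, 64, int(1.5 * sfreq))),
    "incongruent": rng.normal(size=(80, 64, int(1.5 * sfreq))),
}

def mean_amp(x, t0, t1, picks):
    """Mean amplitude over trials, picked electrodes, and a latency window."""
    i0, i1 = int((t0 - tmin) * sfreq), int((t1 - tmin) * sfreq)
    return x[:, picks, i0:i1].mean()

posterior = list(range(50, 64))  # placeholder posterior electrode indices
for cond, x in epochs.items():
    p200 = mean_amp(x, 0.150, 0.250, posterior)
    lpp = mean_amp(x, 0.450, 1.000, posterior)
    print(f"{cond:>11}: P200 = {p200:+.3f}, posterior LPP = {lpp:+.3f}")
```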
•We investigated how the brain uses speech prosody to interpret sexual innuendos by measuring ERPs.
•Utterances containing double entendre phrases, expressed in a sexual or neutral tone, were presented to 24 young adults.
•Early registration of vocal cues guided listeners to detect the taboo meaning of double entendre phrases.
•Sexual innuendo evoked a unique increased negativity response after 600 ms in the left prefrontal region.
•Data establish the cerebral time course of the interpretation of sexual innuendo when vocal and linguistic cues interplay.
Speakers modulate their voice (prosody) to communicate non-literal meanings, such as sexual innuendo (She inspected his package this morning, where “package” could refer to a man’s penis). Here, we analyzed event-related potentials to illuminate how listeners use prosody to interpret sexual innuendo and what neurocognitive processes are involved. Participants listened to third-party statements with literal or ‘sexual’ interpretations, uttered in an unmarked or sexually evocative tone. Analyses revealed: (1) rapid neural differentiation of neutral vs. sexual prosody from utterance onset; (2) an N400-like response differentiating contextually constrained vs. unconstrained utterances following the critical word (reflecting integration of prosody and word meaning); and (3) a selective increased negativity response to sexual innuendo around 600 ms after the critical word. Findings show that the brain quickly integrates prosodic and lexical-semantic information to form an impression of what the speaker is communicating, triggering a unique response to sexual innuendos, consistent with their high social relevance.
Due to the adaptive value of emotional situations, categorizing along the valence dimension may be supported by critical brain functions. The present study examined emotion–cognition relationships by focusing on the influence of an emotional categorization task on the cognitive processing induced by an oddball-like paradigm. Event-related potentials (ERPs) were recorded from subjects explicitly asked to categorize along the valence dimension (unpleasant, neutral or pleasant) deviant target pictures embedded in a train of standard stimuli. Late positivities evoked in response to the target pictures were decomposed into a P3a and a P3b and topographical differences were observed according to the valence content of the stimuli. P3a showed enhanced amplitudes at posterior sites in response to unpleasant pictures as compared to both neutral and pleasant pictures. This effect is interpreted as a negativity bias related to attentional processing. The P3b component was sensitive to the arousal value of the stimulation, with higher amplitudes at several posterior sites for both types of emotional pictures. Moreover, unpleasant pictures evoked smaller amplitudes than pleasant ones at fronto-central sites. Thus, the context updating process may be differentially modulated by the affective arousal and valence of the stimulus. The present study supports the assumption that, during an emotional categorization, the emotional content of the stimulus may modulate the reorientation of attention and the subsequent updating process in a specific way.
There is growing interest in characterizing the neural basis of music perception and, in particular, assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
We employed an adaptation paradigm to compare electrophysiological responses to sounds belonging to the same category or not (music and speech). We found a larger reduction in amplitude (adaptation) in response to sounds belonging to the same category than to sounds from different categories. These results suggest that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
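The adaptation logic above compares a component's amplitude for a sound as a function of the preceding sound's category. A minimal sketch of that contrast on simulated per-trial N100 amplitudes (the adaptation index shown is one common convention, not necessarily the authors'):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-trial N100 peak amplitudes (negative-going, in uV) for
# music targets, split by the category of the immediately preceding sound.
n100_after_music = rng.normal(-4.0, 1.0, size=100)   # same category
n100_after_speech = rng.normal(-6.0, 1.0, size=100)  # different category

same = np.abs(n100_after_music).mean()
diff = np.abs(n100_after_speech).mean()

# Adaptation = response reduction when the preceding sound shares the category.
adaptation_index = (diff - same) / diff
print(f"N100 |amplitude|: same-category {same:.2f}, different-category {diff:.2f}")
print(f"Adaptation index: {adaptation_index:.2%}")
```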
•We investigated whether emotional speech prosody influences emotional face scanning.
•Results confirm effects of emotional prosody congruency on eye movements.
•Vocal emotion cues could guide how humans process facial expressions.
Previous eye-tracking studies have found that listening to emotionally-inflected utterances guides visual behavior towards an emotionally congruent face (e.g., Rigoulot and Pell, 2012). Here, we investigated in more detail whether emotional speech prosody influences how participants scan and fixate specific features of an emotional face that is congruent or incongruent with the prosody. Twenty-one participants viewed individual faces expressing fear, sadness, disgust, or happiness while listening to an emotionally-inflected pseudo-utterance spoken in a congruent or incongruent prosody. Participants judged whether the emotional meaning of the face and voice were the same or different (match/mismatch). Results confirm that there were significant effects of prosody congruency on eye movements when participants scanned a face, although these varied by emotion type; a matching prosody promoted more frequent looks to the upper part of fearful and sad facial expressions, whereas visual attention to the upper and lower regions of happy (and, to some extent, disgust) faces was more evenly distributed. These data suggest that vocal emotion cues guide how humans process facial expressions in ways that could facilitate recognition of salient visual cues and support a holistic impression of intended meanings during interpersonal events.
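The congruency effects above reduce to how fixations are distributed over face regions. A minimal sketch of computing the proportion of fixations falling in upper versus lower areas of interest (AOIs), with hypothetical fixation coordinates and AOI bounds:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical fixations on a face image: (x, y) in pixels, y grows downward.
fixations = rng.uniform(low=[200, 100], high=[600, 700], size=(150, 2))

# Assumed AOIs: upper face (eyes/brows) vs. lower face (mouth/chin).
upper_aoi = dict(x=(250, 550), y=(150, 400))
lower_aoi = dict(x=(250, 550), y=(400, 650))

def in_aoi(fix, aoi):
    """Boolean mask of fixations whose coordinates fall inside the AOI."""
    return ((fix[:, 0] >= aoi["x"][0]) & (fix[:, 0] <= aoi["x"][1]) &
            (fix[:, 1] >= aoi["y"][0]) & (fix[:, 1] <= aoi["y"][1]))

print(f"upper face: {in_aoi(fixations, upper_aoi).mean():.1%} of fixations")
print(f"lower face: {in_aoi(fixations, lower_aoi).mean():.1%} of fixations")
```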
•Timing abilities allow predicting “when” and “what” events are likely to occur.
•We investigated auditory timing abilities in young and older adults.
•ERPs reveal overall less efficient sensory gating in older adults.
•P50 can serve as a marker of temporal predictability in young participants.
Timing abilities help organize the temporal structure of events but are known to change systematically with age. Yet, how the neuronal signature of temporal predictability changes across the age span remains unclear. Younger (n = 21; 23.1 years) and older adults (n = 21; 68.5 years) performed an auditory oddball task consisting of isochronous and random sound sequences. Results confirm an altered P50 response in older compared to younger participants. P50 amplitudes differed between the isochronous and random temporal structures in the younger group, and P200 amplitudes did so in the older group. These results suggest less efficient sensory gating in older adults in both isochronous and random auditory sequences. N100 amplitudes were more negative for deviant tones. P300 amplitudes were parietally enhanced in younger, but not in older, adults. In younger participants, the P50 results confirm that this component marks temporal predictability, indicating sensitive gating of temporally regular sound sequences.
Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and the expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in the theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and the emotional content of sounds in frontal alpha. The results reflect musicians’ expertise in recognizing emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on the processing of musical and vocal sounds.
•We investigated effects of musical expertise on emotional sound processing.
•EEG was recorded while musicians and non-musicians listened to emotional sounds.
•We quantified oscillatory activity in 4 frequency bands (theta, alpha, beta, gamma).
•Musicians showed greater activity than non-musicians in frontal theta and alpha.
•Our data revealed similarities and differences of music and speech processing.
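Quantifying oscillatory activity per frequency band, as in the study above, can be sketched with Welch power spectra averaged within band limits (the band boundaries below are common conventions, the ICA step is omitted, and the data are simulated):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
sfreq = 500
eeg = rng.normal(size=(32, 10 * sfreq))  # simulated 32-channel, 10 s segment

# Common band limits in Hz (conventions vary across studies).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

# Welch PSD per channel, then mean power within each band's frequency bins.
freqs, psd = welch(eeg, fs=sfreq, nperseg=2 * sfreq)
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = psd[:, mask].mean()  # averaged over channels and band bins
    print(f"{name:>5} ({lo}-{hi} Hz): {band_power:.4f}")
```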
To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, which were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions, and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable to what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in the Chinese. Correlation analyses indicated that the immigrants' duration of residence in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling those of North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation followed by alterations in brain activity, providing new evidence on humans' neurocognitive plasticity in communication.
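Two analyses above can be made concrete: the visual MMN as a deviant-minus-standard difference wave, and the correlation between immersion duration and that neural measure. A minimal sketch on fabricated arrays (the window, electrode, and dimensions are placeholders, not the study's pipeline):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n_subj, n_times, sfreq = 25, 300, 500  # hypothetical dimensions (0.6 s epochs)

# Per-subject ERPs at one (assumed) occipital electrode: standards vs. deviants.
standard = rng.normal(size=(n_subj, n_times))
deviant = rng.normal(size=(n_subj, n_times))
vmmn = deviant - standard              # visual MMN difference wave

# Mean vMMN amplitude in an assumed 150-300 ms window.
win = slice(int(0.150 * sfreq), int(0.300 * sfreq))
vmmn_amp = vmmn[:, win].mean(axis=1)

# Does time spent living in the new culture track the neural measure?
years_in_canada = rng.uniform(0.5, 15.0, size=n_subj)
r, p = pearsonr(years_in_canada, vmmn_amp)
print(f"vMMN amplitude vs. immersion duration: r = {r:.2f}, p = {p:.3f}")
```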