Emotional cues from different modalities have to be integrated during communication, a process that can be shaped by an individual’s cultural background. We explored this issue in 25 Chinese participants by examining how listening to emotional prosody in Mandarin influenced their gaze to emotional faces in a modified visual search task. We also conducted a cross-cultural comparison between the data of this study and those of our previous work on English-speaking Canadians, which used analogous methodology. In both studies, eye movements were recorded as participants scanned an array of four faces portraying fearful, angry, happy, and neutral expressions while passively listening to a pseudo-utterance expressing one of the four emotions (a Mandarin utterance in this study; an English utterance in our previous study). The frequency and duration of fixations to each face were analyzed during the 5 seconds following face onset, both while the speech was present (early time window) and after the utterance ended (late time window). During the late window, Chinese participants looked more frequently and longer at faces conveying emotions congruent with the speech, consistent with findings from English-speaking Canadians. The cross-cultural comparison further showed that Chinese participants, but not Canadians, looked more frequently and longer at angry faces, which may signal potential conflict and social threat. We hypothesize that socio-cultural norms related to harmony maintenance in Eastern cultures promoted Chinese participants’ heightened sensitivity to, and deeper processing of, angry cues, highlighting culture-specific patterns in how individuals scan their social environment during emotion processing.
The current study investigates whether some of the variation in h-production observed among Quebec francophone (QF) learners of English could follow from their at times assimilating /h/ to /ʁ/. In earlier research, we attributed this variation exclusively to QFs developing an approximate (“fuzzy” or “murky”) representation of /h/ that is not fully reliable as a base for h-perception and production. Nonetheless, two previous studies observed, via event-related potentials, differences in QF perceptual ability that may follow from the quality of the vowel used in the stimuli: /ɑ/ vs. /ʌ/ (detection vs. no detection of /h/). Before the vowel /ɑ/, /h/ exhibits phonetic properties that may allow it to be assimilated to, and thus underlyingly represented as, /ʁ/. If /h/ is at times subject to approximate representation (e.g., before /ʌ/) and at others captured as /ʁ/ (before /ɑ/), we would expect production of /h/ to reflect this representational distinction, with greater accuracy rates in items containing /ɑ/. Two-way ANOVAs and paired Bayesian
t-tests on the reading-aloud data of 27 QFs, however, reveal no difference in h-production according to vowel type. We address the consequences of our findings, discussing notably why QFs have such enduring difficulty acquiring /h/ despite the feature spread glottis being available in their representational repertoire. We propose the presence of a Laryngeal Input Constraint that renders representations containing only a laryngeal feature highly marked. We also consider the possibility that, rather than having overcome this constraint, some highly advanced learners are “phonological zombies”: these learners become so adept at employing approximate representations in perception and production that they are indistinguishable from speakers with bona fide phonemic representations.
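The statistical comparison described above can be illustrated with a small sketch: a paired t-test supplemented by a BIC-based Bayes factor approximation (after Wagenmakers, 2007), where BF01 > 1 favours the null hypothesis of no difference between vowel conditions. The data, function name, and accuracy values below are simulated and hypothetical, not taken from the study.

```python
import numpy as np
from scipy import stats

def paired_bayes_t(x, y):
    """Paired t-test plus a rough BIC-based Bayes factor approximation:
    BF01 ~ sqrt(n) * (1 + t^2/(n-1))^(-n/2).  BF01 > 1 favours the null."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    t, p = stats.ttest_rel(x, y)
    bf01 = np.sqrt(n) * (1.0 + t**2 / (n - 1)) ** (-n / 2.0)
    return t, p, bf01

# Hypothetical per-participant h-production accuracy for the two vowel contexts
rng = np.random.default_rng(0)
acc_a = rng.normal(0.70, 0.10, 27)            # items containing /ɑ/
acc_uh = acc_a + rng.normal(0.0, 0.05, 27)    # items containing /ʌ/ (no true effect)
t, p, bf01 = paired_bayes_t(acc_a, acc_uh)
print(f"t = {t:.2f}, p = {p:.3f}, BF01 = {bf01:.2f}")
```

The BIC approximation is a coarse stand-in for the default (e.g., JZS) Bayes factors usually reported; it is used here only to keep the sketch self-contained.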
To understand how culture modulates on-line neural responses to social information, this study compared how individuals from two distinct cultural groups, English-speaking North Americans and Chinese, process the emotional meanings of multi-sensory stimuli, as indexed by both behavioural (accuracy) and event-related potential (N400) measures. In an emotional Stroop-like task, participants were presented with face–voice pairs expressing congruent or incongruent emotions, in conditions where they judged the emotion of one modality while ignoring the other (face or voice focus task). Results indicated that while both groups were sensitive to emotional differences between channels (with lower accuracy and higher N400 amplitudes for incongruent face–voice pairs), there were marked group differences in how intruding facial or vocal cues affected accuracy and N400 amplitudes, with English participants showing greater interference from irrelevant faces than Chinese participants. Our data illuminate distinct biases in how adults from East Asian versus Western cultures process socio-emotional cues, supplying new evidence that cultural learning modulates not only behaviour but also the neurocognitive response to different features of multi-channel emotion expressions.
• Chinese and English North Americans were compared in face–voice emotion perception.
• English participants were more susceptible to facial expressions than to emotional prosody.
• Chinese participants showed no significant difference between the two sensory channels.
• These cross-group differences were observed in both behavioural accuracy and N400 amplitudes.
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows (0-1250 ms, 1250-2500 ms, 2500-5000 ms) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
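The windowed fixation measures described above (frequency and total duration of looks per face, per time window) can be sketched as follows. The record format (onset, duration, face) and the example trial are hypothetical, invented for illustration only.

```python
# The three analysis windows used in the study, in ms from array onset
WINDOWS = [(0, 1250), (1250, 2500), (2500, 5000)]

def fixation_stats(fixations):
    """Count fixations and sum gaze duration per (window, face),
    assigning each fixation to the window containing its onset."""
    stats = {}  # (window, face) -> [count, total_duration_ms]
    for onset, dur, face in fixations:
        for win in WINDOWS:
            if win[0] <= onset < win[1]:
                entry = stats.setdefault((win, face), [0, 0])
                entry[0] += 1
                entry[1] += dur
    return stats

# Hypothetical trial with fearful prosody: looks keep returning to the fearful face
trial = [
    (100, 300, "fear"),
    (600, 200, "neutral"),
    (1400, 500, "fear"),
    (3000, 800, "fear"),
]
for (win, face), (n, total) in sorted(fixation_stats(trial).items()):
    print(f"{win}: {face:8s} n={n} total={total} ms")
```

Per-condition averages of these counts and durations would then feed the congruent-versus-incongruent comparisons reported in the abstract.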
During social interactions, listeners weigh the importance of linguistic and extra-linguistic speech cues (prosody) to infer the true intentions of the speaker in reference to what is actually said. In this study, we investigated what brain processes allow listeners to detect when a spoken compliment is meant to be sincere (true compliment) or not (“white lie”). Electroencephalograms of 29 participants were recorded while they listened to Question–Response pairs, where the response was expressed in either a sincere or insincere tone (e.g., “So, what did you think of my presentation?”/“I found it really interesting.”). Participants judged whether the response was sincere or not. Behavioral results showed that prosody could be effectively used to discern the intended sincerity of compliments. Analysis of temporal and spatial characteristics of event-related potentials (P200, N400, P600) uncovered significant effects of prosody on P600 amplitudes, which were greater in response to sincere versus insincere compliments. Using low resolution brain electromagnetic tomography (LORETA), we determined that the anatomical sources of this activity were likely located in the (left) insula, consistent with previous reports of insular activity in the perception of lies and concealments. These data extend knowledge of the neurocognitive mechanisms that permit context-appropriate inferences about speaker feelings and intentions during interpersonal communication.
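A minimal sketch of the kind of mean-amplitude comparison behind the P600 result above: the component is quantified as the average voltage in a post-stimulus window (the 500-800 ms window, sampling rate, and all values below are illustrative assumptions, not the study's parameters).

```python
import numpy as np

def mean_amplitude(erp, times, t_min, t_max):
    """Mean ERP amplitude in [t_min, t_max) ms; erp and times are 1-D arrays."""
    mask = (times >= t_min) & (times < t_max)
    return float(erp[mask].mean())

# Hypothetical single-electrode condition averages sampled at 250 Hz
times = np.arange(-200, 1000, 4)                     # ms relative to response onset
rng = np.random.default_rng(0)
sincere = rng.normal(0, 1, times.size)
sincere[(times >= 500) & (times < 800)] += 3.0       # simulate a larger P600
insincere = rng.normal(0, 1, times.size)

p600_sincere = mean_amplitude(sincere, times, 500, 800)
p600_insincere = mean_amplitude(insincere, times, 500, 800)
print(f"P600 sincere: {p600_sincere:.2f}, insincere: {p600_insincere:.2f}")
```

Such per-condition amplitudes, computed per participant and electrode, are what the reported statistical contrasts (and the subsequent LORETA source estimation) operate on.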
Behavioural studies have used spatial cueing designs extensively to investigate emotional biases in individuals exhibiting clinical and sub-clinical anxiety. However, the neural processes underlying the generation of these biases remain largely unknown. In this study, people who scored unusually high or low on scales of social anxiety performed a spatial cueing task. They were asked to discriminate the orientation of arrows appearing at the location previously occupied by a lateralised cue (consisting of a face displaying an emotional or a neutral expression) or at the empty location. The results showed that the perceptual encoding of faces, indexed by P1, and mobilisation of attentional resources, reflected in P2 on occipital locations, were modulated by social anxiety. These modulations were directly linked to the social anxiety level but not to trait anxiety. By contrast, later cognitive stages and behavioural performances were not modulated by social anxiety, supporting the theory of dissociation between efficiency and effectiveness in anxiety.
► In peripheral vision, fearful faces induce shorter reaction times than neutral ones. ► Early and late evoked components are enhanced by the fearful facial expression. ► Despite lower acuity, the preferential processing of emotional information persists in the periphery.
Many studies have provided evidence that the emotional content of visual stimuli modulates behavioral performance and neuronal activity. Surprisingly, these studies were carried out using stimuli presented in the center of the visual field, although most visual events first appear in the peripheral visual field. In this study, we assessed the impact of the fearful facial expression when projected in the near and far periphery. Sixteen participants were asked to categorize fearful and neutral faces projected at four peripheral visual locations (15° and 30° of eccentricity in the right and left sides of the visual field) while reaction times and event-related potentials (ERPs) were recorded. ERPs were analyzed by means of spatio-temporal principal component and baseline-to-peak methods. Behavioral data confirmed the decrease in performance with eccentricity and showed that fearful faces induced shorter reaction times than neutral ones. Electrophysiological data revealed that the spatial position and the emotional content of faces modulated ERP components; in particular, the amplitude of the N170 was enhanced by the fearful facial expression. These findings shed light on how visual eccentricity modulates the processing of emotional faces and suggest that, despite impoverished visual conditions, the preferential neural coding of the fearful expression of faces persists in far peripheral vision. The emotional content of faces could therefore contribute to their foveal or attentional capture, as in social interactions.
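The temporal side of a spatio-temporal PCA on ERP data, as mentioned above, can be sketched with a plain SVD over an observations × time matrix; the electrode count, the N170-like waveform, and the noise levels below are simulated assumptions, not the study's data.

```python
import numpy as np

def temporal_pca(erps, n_comp=3):
    """Temporal PCA of ERP data: rows are observations (e.g., per-electrode
    condition averages), columns are time points.  Returns component
    waveforms (loadings), per-observation scores, and explained variance."""
    X = erps - erps.mean(axis=0, keepdims=True)       # centre each time point
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[:n_comp]                            # (n_comp, n_times)
    scores = U[:, :n_comp] * S[:n_comp]
    explained = (S**2 / np.sum(S**2))[:n_comp]
    return loadings, scores, explained

# Simulated data: 32 observations sharing an N170-like deflection at ~170 ms
rng = np.random.default_rng(0)
t = np.linspace(0, 0.6, 150)
n170 = -np.exp(-((t - 0.17) ** 2) / (2 * 0.02**2))
X = rng.normal(0, 0.1, (32, 150)) + rng.normal(1, 0.5, (32, 1)) * n170
loadings, scores, explained = temporal_pca(X)
print(f"first component explains {explained[0]:.0%} of variance")
```

A full spatio-temporal variant would additionally decompose the scores across electrodes; this sketch covers only the temporal step.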
Fragile X syndrome (FXS) is a neurodevelopmental genetic disorder causing cognitive and behavioural deficits. Repetition suppression (RS), a learning phenomenon in which stimulus repetitions result in diminished brain activity, has been found to be impaired in FXS. Alterations in RS have been associated with behavioural problems in FXS; however, relations between RS and intellectual functioning have not yet been elucidated.
EEG was recorded in 14 FXS participants and 25 neurotypical controls during an auditory habituation paradigm using repeatedly presented pseudowords. Non-phase-locked signal energy was compared across presentations and between groups using linear mixed models (LMMs) in order to investigate RS effects across repetitions and brain areas, as well as a possible relation to non-verbal IQ (NVIQ) in FXS. In addition, we explored group differences according to NVIQ and probed the feasibility of training a support vector machine to predict cognitive functioning levels across FXS participants based on single-trial RS features.
LMM analyses showed that repetition effects differ between groups (FXS vs. controls) as well as with respect to NVIQ in FXS. When exploring group differences in RS patterns, we found that neurotypical controls revealed the expected pattern of RS between the first and second presentations of a pseudoword. More importantly, while FXS participants in the ≤ 42 NVIQ group showed no RS, the > 42 NVIQ group showed a delayed RS response after several presentations. Concordantly, single-trial estimates of repetition effects over the first four repetitions provided the highest decoding accuracies in the classification between the FXS participant groups.
Electrophysiological measures of repetition effects provide a non-invasive and unbiased measure of brain responses sensitive to cognitive functioning levels, which may be useful for clinical trials in FXS.
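The decoding analysis described above might look roughly like this with scikit-learn, assuming (hypothetically) that each participant contributes a vector of signal-energy changes from the first presentation to each of the next four; all feature values, group sizes, and the separation pattern are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical repetition-effect features (negative = suppression);
# labels: 0 = NVIQ <= 42 group (no RS), 1 = NVIQ > 42 group (delayed RS)
rng = np.random.default_rng(0)
low = rng.normal(0.0, 0.3, (10, 4))                       # flat: no suppression
high = rng.normal([0.1, 0.0, -0.6, -0.9], 0.3, (10, 4))   # late-onset suppression
X = np.vstack([low, high])
y = np.array([0] * 10 + [1] * 10)

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out decoding accuracy: {acc:.2f}")
```

Leave-one-out cross-validation is a natural choice here given the small participant counts typical of FXS studies, though the abstract does not specify the scheme used.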
Current research in affective neuroscience suggests that the emotional content of visual stimuli activates brain-body responses that could be critical to general health and physical disease. The aim of this study was to develop an integrated neurophysiological approach linking central and peripheral markers of nervous activity during the presentation of natural scenes in order to determine the temporal stages of brain processing related to the bodily impact of emotions. More specifically, whole head magnetoencephalogram (MEG) data and skin conductance response (SCR), a reliable autonomic marker of central activation, were recorded in healthy volunteers during the presentation of emotional (unpleasant and pleasant) and neutral pictures selected from the International Affective Picture System (IAPS). Analyses of event-related magnetic fields (ERFs) revealed greater activity at 180 ms in an occipitotemporal component for emotional pictures than for neutral counterparts. More importantly, these early effects of emotional arousal on cerebral activity were significantly correlated with later increases in SCR magnitude. For the first time, a neuromagnetic cortical component linked to a well-documented marker of bodily arousal expression of emotion, namely, the SCR, was identified and located. This finding sheds light on the time course of the brain-body interaction with emotional arousal and provides new insights into the neural bases of complex and reciprocal mind-body links.
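The reported link between the early ERF component and later SCR magnitude amounts, at its simplest, to a correlation across trials or participants; here is a sketch with simulated, linearly coupled data (units, scales, and the coupling strength are all assumptions).

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trial measures: occipitotemporal ERF amplitude at ~180 ms
# and the magnitude of the later skin conductance response on the same trials
rng = np.random.default_rng(0)
erf_180 = rng.normal(50, 10, 40)                  # arbitrary ERF amplitude units
scr = 0.02 * erf_180 + rng.normal(0, 0.1, 40)     # µS, simulated linear coupling

r, p = pearsonr(erf_180, scr)
print(f"r = {r:.2f}, p = {p:.4f}")
```

In practice such a correlation would be computed on appropriately baseline-corrected, per-condition measures; this sketch only shows the core statistical step.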