Automatic facial coding (AFC) is a promising new research tool to efficiently analyze emotional facial expressions. AFC is based on machine learning procedures that infer emotion categorization from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense facial expressions. A potential reason is that AFC is typically trained with standardized and prototypical facial expression inventories. Because AFC would be useful for analyzing less prototypical research material as well, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting, and tested them on identical or cross-over material. All machine learning models achieved comparable accuracies when trained and tested on held-out data from the same dataset (acc. = 83.4% to 92.5%). Strikingly, we found a substantial drop in accuracy for models trained on the highly prototypical standardized dataset and tested on the unstandardized dataset (acc. = 52.8%; 69.8%). However, when models were trained on unstandardized expressions and tested on the standardized datasets, accuracies held up (acc. = 82.7%; 92.5%). These findings demonstrate a strong impact of the training material's prototypicality on AFC's ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research or even naturalistic scenarios, future developments should include more naturalistic facial expressions in the training material. This approach will improve the generalization of AFC to more naturalistic facial expressions and increase the robustness of future applications of this promising technology.
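As a rough illustration of the within- versus cross-dataset comparison described above, the following sketch trains a simple Action-Unit-based emotion classifier and evaluates it on held-out samples from the same dataset as well as on the other dataset. The classifier (a scikit-learn support vector machine), the feature shapes, and the placeholder data are assumptions made for illustration and do not reflect the study's actual models or data.

```python
# Minimal sketch (not the study's pipeline): within- vs. cross-dataset evaluation
# of an emotion classifier trained on Action Unit (AU) intensities.
# X_std / X_unstd are hypothetical (n_samples x n_AUs) matrices; y_* are emotion labels.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_std, y_std = rng.random((420, 20)), rng.integers(0, 6, 420)      # standardized inventory (placeholder)
X_unstd, y_unstd = rng.random((420, 20)), rng.integers(0, 6, 420)  # unstandardized lab recordings (placeholder)

def within_dataset_accuracy(X, y):
    """Train and test on held-out data from the same dataset."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

def cross_dataset_accuracy(X_train, y_train, X_test, y_test):
    """Train on one dataset, test on the other (the critical comparison)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

print("within standardized dataset:", within_dataset_accuracy(X_std, y_std))
print("standardized -> unstandardized:", cross_dataset_accuracy(X_std, y_std, X_unstd, y_unstd))
print("unstandardized -> standardized:", cross_dataset_accuracy(X_unstd, y_unstd, X_std, y_std))
```

With real AU codings in place of the placeholders, the three printed accuracies correspond to the within-dataset and the two cross-over comparisons reported above.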
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensely posed, prototypical expressions. However, classification of the facial expressions of untrained study participants is more error prone. This discrepancy calls for a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology) and compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassifications were more frequent for some emotions expressed by untrained participants. Second, AU intensities were generally lower in pictures of untrained participants than in standardized pictures for all emotions. Third, although the profiles of relevant AUs overlapped considerably across the two datasets, there were also notable differences. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code the facial expressions of untrained participants in a typical laboratory setting.
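The abstract above refers to a novel machine learning approach for identifying the relevant AUs per emotion; the exact method is not specified there, so the sketch below shows one generic way such relevance could be estimated, via one-vs-rest random forest feature importances over a hypothetical AU intensity matrix.

```python
# Illustrative only (not the paper's method): rank Action Units by their importance
# for discriminating one emotion from the rest, given AU intensities X and labels y.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
au_names = [f"AU{i:02d}" for i in (1, 2, 4, 5, 6, 7, 9, 10, 12, 15, 17, 20, 23, 25, 26)]
X = rng.random((420, len(au_names)))  # placeholder AU intensities (n_videos x n_AUs)
y = rng.choice(["joy", "anger", "surprise", "sadness", "disgust", "fear"], 420)

def relevant_aus(X, y, emotion, top_k=5):
    """Fit a one-vs-rest forest and return the top_k most important AUs for `emotion`."""
    target = (y == emotion).astype(int)
    forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, target)
    order = np.argsort(forest.feature_importances_)[::-1][:top_k]
    return [(au_names[i], round(float(forest.feature_importances_[i]), 3)) for i in order]

for emo in ["joy", "anger", "disgust"]:
    print(emo, relevant_aus(X, y, emo))
```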
Our first impression of others is highly influenced by their facial appearance. However, the perception and evaluation of faces are not only guided by internal features such as facial expressions but are also highly dependent on contextual information, such as secondhand information (verbal descriptions) about the target person. To investigate the time course of contextual influences on cortical face processing, event-related brain potentials (ERPs) were recorded in response to neutral faces that were preceded by brief verbal descriptions containing cues of affective valence (negative, neutral, positive) and self-reference (self-related vs. other-related). ERP analysis demonstrated that early and late stages of face processing are enhanced by negative, positive, and self-relevant descriptions, although the faces themselves did not differ perceptually. Affective ratings of the faces confirmed these findings. Altogether, these results demonstrate for the first time, at both the electrocortical and the behavioral level, how contextual information modifies early visual perception in a top-down manner.
•Neutral faces were contextualized by preceding sentences
•Face-evoked ERPs were recorded
•Face-evoked ERPs are enhanced by affective and self-relevant descriptions
•Contextual information alters early visual perception in a top-down manner
Decoding someone's facial expressions provides insights into his or her emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to provide measurements of emotional facial expressions. Previous studies provided first evidence for the sensitivity of such systems to detect facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in variable social interactions. Thus, we presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) to either actively mimic them, look at them passively (n = 21), or inhibit their own facial reaction (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were registered continuously. In the mimicking condition, both AFC and EMG differentiated well between facial responses to the different emotional pictures. In the passive viewing and the inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG remained highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings that Automatic Facial Coding is a promising tool for the detection of intense facial reactions.
Even more than in cognitive research applications, moving fMRI to the clinic and the drug development process requires the generation of stable and reliable signal changes. The performance characteristics of the fMRI paradigm constrain experimental power and may require different study designs (e.g., crossover vs. parallel groups), yet fMRI reliability characteristics can be strongly dependent on the nature of the fMRI task. The present study investigated both within-subject and group-level reliability of a combined three-task fMRI battery targeting three systems of wide applicability in clinical and cognitive neuroscience: an emotional (face matching), a motivational (monetary reward anticipation) and a cognitive (n-back working memory) task. A group of 25 young, healthy volunteers were scanned twice on a 3T MRI scanner with a mean test–retest interval of 14.6 days. fMRI reliability was quantified using the intraclass correlation coefficient (ICC) applied at three different levels ranging from a global to a localized and fine spatial scale: (1) reliability of group-level activation maps over the whole brain and within targeted regions of interest (ROIs); (2) within-subject reliability of ROI-mean amplitudes; and (3) within-subject reliability of individual voxels in the target ROIs. Results showed robust evoked activation for all three tasks in their respective target regions (emotional task = amygdala; motivational task = ventral striatum; cognitive task = right dorsolateral prefrontal cortex and parietal cortices) with high effect sizes (ES) of ROI-mean summary values (ES = 1.11–1.44 for the faces task, 0.96–1.43 for the reward task, 0.83–2.58 for the n-back task). Reliability of group-level activation was excellent for all three tasks, with ICCs of 0.89–0.98 at the whole-brain level and 0.66–0.97 within target ROIs. Within-subject reliability of ROI-mean amplitudes across sessions was fair to good for the reward task (ICCs = 0.56–0.62) and, dependent on the particular ROI, also fair to good for the n-back task (ICCs = 0.44–0.57), but lower for the faces task (ICCs = −0.02 to 0.16). In conclusion, all three tasks are well suited to between-subject designs, including imaging genetics. When specific recommendations are followed, the n-back and reward tasks are also suited for within-subject designs, including pharmaco-fMRI. The present study provides task-specific fMRI reliability performance measures that will inform the optimal use, powering and design of fMRI studies using comparable tasks.
► Within-subject and group-level reliability of an fMRI battery were determined. ► The task battery comprised emotional, motivational and cognitive tasks. ► Reliability of group level activation was excellent for all three tasks. ► Within-subject reliability of ROI-mean amplitudes varied by task. ► Results inform the optimal use, powering and design of future fMRI studies.
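The reliability metrics in the abstract above are intraclass correlation coefficients; the abstract does not state which ICC variant was computed, so the sketch below assumes a common test–retest choice, the two-way, single-measure, consistency ICC(3,1), applied to a hypothetical subjects × sessions matrix of ROI-mean amplitudes.

```python
# Sketch of a test-retest ICC(3,1) computation (assumed variant, placeholder data).
import numpy as np

def icc_3_1(data):
    """Two-way, single-measure, consistency ICC for an (n_subjects x n_sessions) array."""
    n, k = data.shape
    grand = data.mean()
    ss_subjects = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_sessions = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_error = np.sum((data - grand) ** 2) - ss_subjects - ss_sessions
    bms = ss_subjects / (n - 1)            # between-subjects mean square
    ems = ss_error / ((n - 1) * (k - 1))   # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical example: 25 subjects scanned in 2 sessions.
rng = np.random.default_rng(2)
subject_effect = rng.normal(1.0, 0.5, size=(25, 1))         # stable between-subject signal
data = subject_effect + rng.normal(0.0, 0.3, size=(25, 2))  # plus session-specific noise
print(f"ICC(3,1) = {icc_3_1(data):.2f}")
```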
Expectation and previous experience are both well-established key mediators of placebo and nocebo effects. However, investigating their respective contributions to placebo and nocebo responses is difficult because most placebo and nocebo manipulations are contaminated by pre-existing treatment expectancies resulting from a learning history of previous medical interventions. To circumvent any resemblance to classical treatments, a purely psychological placebo-nocebo manipulation was established, namely, the “visual stripe pattern–induced modulation of pain.” To this end, experience and expectation regarding the effects of different visual cues (stripe patterns) on pain were varied across three groups, which received either placebo instruction only (expectation), placebo conditioning only (experience), or both (expectation + experience). Only the combined manipulation (expectation + experience) produced significant behavioral and physiological placebo-nocebo effects on pain. Two subsequent experiments, which included a neutral control condition in addition to placebo and nocebo cues, further showed that nocebo responses in particular were more easily induced by this psychological placebo-nocebo manipulation. The results emphasize the strong influence of psychological processes on placebo and nocebo effects. Nocebo effects in particular should be addressed more thoroughly and considered carefully in clinical practice to prevent the accidental induction of side effects.
Even purely psychological interventions that lack any resemblance to classical pain treatments can alter subjective and physiological pain correlates. A combined manipulation of treatment expectation and actual treatment experience was necessary to elicit this effect. Nocebo effects in particular were readily induced, underscoring the need to prevent accidental side effects in addition to exploiting placebo responses.
•The authors introduced a purely psychological, non-pharmacological placebo-nocebo paradigm.
•A combined manipulation of experience and expectation led to a significant modulation of pain.
•Even abstract treatments might alter the perception of pain.
•The authors' manipulation induced stronger nocebo than placebo responses.
•The results demonstrate the need for further research on the prevention of nocebo effects.
In everyday life, multiple sensory channels jointly trigger emotional experiences, and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the emotional tone of a voice jointly create the emotional experience. This example, in which auditory and visual input relate to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies that investigated emotional sounds and their interactions with complex pictures of scenes. In addition to behavioral studies, we focus on neuroimaging, electrophysiological, and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.
Facial expressions provide insight into a person's emotional experience. Automatically decoding these expressions has been made possible by tremendous progress in the field of computer vision. Researchers are now able to decode emotional facial expressions with impressive accuracy in standardized images of prototypical basic emotions. We tested the sensitivity of a well-established automatic facial coding software program to detect spontaneous emotional reactions in individuals responding to emotional pictures. We compared automatically generated valence and arousal scores from FaceReader (FR; Noldus Information Technology) with the current psychophysiological gold standards for measuring emotional valence (facial electromyography, EMG) and arousal (skin conductance, SC). We recorded physiological and behavioral measurements of 43 healthy participants while they looked at pleasant, unpleasant, or neutral scenes. When viewing pleasant pictures, FR Valence and EMG were comparably sensitive. For unpleasant pictures, however, FR Valence showed the expected negative shift but did not differentiate well between responses to neutral and unpleasant stimuli, which were clearly distinguishable with EMG. Furthermore, FR Arousal correlated more strongly with self-reported valence than with self-reported arousal, whereas SC was sensitively and specifically associated with self-reported arousal. This is the first study to systematically compare FR measurements of spontaneous emotional reactions to standardized emotional images with established psychophysiological measurement tools. This novel technology has not yet matched the sensitivity of established psychophysiological measures, but it provides a promising new technique for the non-contact assessment of emotional responses.
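A minimal sketch of the kind of correlation comparison reported above, i.e., how strongly an automatically scored arousal value relates to self-reported valence versus self-reported arousal. The variable names and toy data are hypothetical; only the analysis pattern (pairwise Pearson correlations) is illustrated.

```python
# Hypothetical illustration (toy data, not the study's data): comparing correlations
# between an automatic arousal score and self-reported valence vs. arousal.
import numpy as np

rng = np.random.default_rng(3)
n = 43                                   # sample size reported in the abstract
self_valence = rng.normal(0.0, 1.0, n)
self_arousal = rng.normal(0.0, 1.0, n)
# Toy automatic score constructed to track valence more closely than arousal.
auto_arousal = 0.6 * self_valence + 0.2 * self_arousal + rng.normal(0.0, 0.5, n)

r_valence = np.corrcoef(auto_arousal, self_valence)[0, 1]
r_arousal = np.corrcoef(auto_arousal, self_arousal)[0, 1]
print(f"automatic arousal vs. self-reported valence: r = {r_valence:.2f}")
print(f"automatic arousal vs. self-reported arousal: r = {r_arousal:.2f}")
```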
Recently, we demonstrated that the peak-end memory bias, which is well established in the context of pain, can also be observed in anxiety: retrospective evaluations of a frightening experience are worse when peak anxiety is experienced at the end of an episode. Here, we set out to conceptually replicate and extend this finding with rigorous experimental control in a threat-of-shock paradigm. We induced two intensity levels of anxiety by presenting visual cues that indicated different strengths of electric stimuli. Each of the 59 participants went through one of two conditions that differed only in the order of moderate and high threat phases. As a manipulation check, orbicularis EMG responses to auditory startle probes, electrodermal activity, and state anxiety confirmed the effects of the specific threat exposure. Critically, after some time had passed, participants for whom exposure had ended with high threat reported more anxiety for the entire episode than those for whom it had ended with moderate threat. Moreover, they ranked their experience as more aversive when compared with other unpleasant everyday experiences. This study overcomes several previous limitations and speaks to the generalizability of the peak-end bias. Most notably, the findings bear implications for exposure therapy in clinical anxiety.
•This research convincingly consolidates evidence for the peak-end bias in anxiety.
•The ending of a frightening episode determines how it is evaluated retrospectively.
•Inducing anxiety with threat of shock provided us with rigorous experimental control.
•Physiological and self-report indices corroborate the graded induction of anxiety.
•The findings bear relevant implications for exposure therapy in clinical anxiety.
Numerous studies have shown that humans automatically react with congruent facial reactions, i.e., facial mimicry, when seeing a vis-à-vis' facial expressions. The current experiment is the first to investigate the neuronal structures responsible for differences in the occurrence of such facial mimicry reactions by simultaneously measuring BOLD and facial EMG in an MRI scanner. To this end, 20 female students viewed emotional facial expressions (happy, sad, and angry) of male and female avatar characters. During picture presentation, the BOLD signal as well as M. zygomaticus major and M. corrugator supercilii activity were recorded simultaneously. Results show prototypical patterns of facial mimicry after correction for MR-related artifacts: enhanced M. zygomaticus major activity in response to happy expressions and enhanced M. corrugator supercilii activity in response to sad and angry expressions. Regression analyses show that these congruent facial reactions correlate significantly with activations in the IFG, SMA, and cerebellum. Stronger zygomaticus reactions to happy faces were further associated with increased activity in the caudate, MTG, and PCC. Corrugator reactions to angry expressions were further correlated with activity in the hippocampus, insula, and STS. Results are discussed in relation to core and extended models of the mirror neuron system (MNS).