There is good reason to believe that gaze direction and facial displays of emotion share an information value as signals of approach or avoidance. The combination of these cues in the analysis of social communication, however, has been a virtually neglected area of inquiry. Two studies were conducted to test the prediction that direct gaze would facilitate the processing of facially communicated approach-oriented emotions (e.g., anger and joy), whereas averted gaze would facilitate the processing of facially communicated avoidance-oriented emotions (e.g., fear and sadness). The results of both studies confirmed the central hypothesis and suggest that gaze direction and facial expression are combined in the processing of emotionally relevant facial information.
Abstract
Social vision research, which examines, in part, how humans visually perceive social stimuli, is well-positioned to improve understandings of social inequality. However, social vision research has rarely prioritized the perspectives of marginalized group members. We offer a theoretical argument for diversifying understandings of social perceptual processes by centering marginalized perspectives. We examine (a) how social vision researchers frame their research questions and who these framings prioritize and (b) how perceptual processes (person perception; people perception; perception of social objects) are linked to group membership, such that comprehensively understanding these processes necessitates attention to marginalized perceivers. We discuss how social vision research translates into theoretical advances and into action for reducing negative intergroup consequences (e.g., prejudice). The purpose of this article is to delineate how prioritizing marginalized perspectives in social vision research could develop novel questions, bridge theoretical gaps, and elevate social vision's translational impact to improve outcomes for marginalized groups.
Public Abstract
Social vision research is a subfield of psychology and vision science that examines how people visually perceive social stimuli and the downstream consequences of these perceptions. Social vision work includes, for example, examination of how White people visually perceive racial minorities and how these perceptions lead to social categorizations of racial minorities as outgroups, and therefore contribute to behaviors such as stereotyping and prejudice. Social vision research has rarely prioritized the perspectives of marginalized group members. It therefore cannot fully explain the contributions of perception to intergroup relations, which are necessarily bidirectional. We offer a theoretical argument for diversifying understandings of social perceptual processes by centering marginalized perspectives to understand how people with marginalized identities see their social worlds. We believe that prioritizing these marginalized perspectives has the potential to contribute to the development of a psychological science with heightened capacity to improve the well-being of people with marginalized identities.
The ability to infer others' thoughts, intentions, and feelings is regarded as uniquely human. Over the last few decades, this remarkable ability has captivated the attention of philosophers, primatologists, clinical and developmental psychologists, anthropologists, social psychologists, and cognitive neuroscientists. Most would agree that the capacity to reason about others' mental states is innately prepared and essential for successful human social interaction. Whether this ability is culturally tuned, however, remains entirely uncharted on both the behavioral and neural levels. Here we provide the first behavioral and neural evidence for an intracultural advantage (better performance for same- vs. other-culture) in mental state decoding in a sample of native Japanese and white American participants. We examined the neural correlates of this intracultural advantage using fMRI, revealing greater bilateral posterior superior temporal sulci recruitment during same- versus other-culture mental state decoding in both cultural groups. These findings offer preliminary support for cultural consistency in the neurological architecture subserving high-level mental state reasoning, as well as its differential recruitment based on cultural group membership.
Polling the Face
Rule, Nicholas O; Ambady, Nalini; Adams, Reginald B ...
Journal of Personality and Social Psychology, 01/2010, Volume 98, Issue 1
Journal Article · Peer reviewed · Open access
Previous work has shown that individuals agree across cultures on the traits that they infer from faces. Previous work has also shown that inferences from faces can be predictive of important outcomes within cultures. The current research merges these two lines of work. In a series of cross-cultural studies, the authors asked American and Japanese participants to provide naïve inferences of traits from the faces of U.S. political candidates (Studies 1 and 3) and Japanese political candidates (Studies 2 and 4). Perceivers showed high agreement in their ratings of the faces, regardless of culture, and both sets of judgments were predictive of an important ecological outcome (the percentage of votes that each candidate received in the actual election). The traits predicting electoral success differed, however, depending on the targets' culture. Thus, when American and Japanese participants were asked to provide explicit inferences of how likely each candidate would be to win an election (Studies 3-4), judgments were predictive only for same-culture candidates. Attempts to infer the electoral success for the foreign culture showed evidence of self-projection. Therefore, perceivers can reliably infer predictive information from faces but require knowledge about the target's culture to make these predictions accurately.
• Top-down predictive processes directly influence visual perception.
• Predictions incorporate many cognitive processes to produce information-rich signals.
• We demonstrate this in the healthy brain and in neuropsychiatry.
• Cognitive penetration is a complementary framework for understanding visual perception.
It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence indicating that a wide spectrum of domains, including object recognition, contextual associations, cognitive biases and affective state, can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception.
Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, as a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data for computers to learn to recognize the body language of humans. To accomplish this task, a large and growing annotated dataset with 9876 video clips of body movements and 13,239 human characters, named Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model the emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate computability of bodily expression. We report and compare results of several other baseline methods which were developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans.
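The abstract reports that Laban Movement Analysis (LMA) features computed from body-skeleton data track arousal. As a rough illustration only, not the authors' actual feature set, the sketch below computes a minimal "movement energy" statistic (mean frame-to-frame joint displacement) from hypothetical skeleton keypoints; the function name, array layout, and synthetic clips are all assumptions for demonstration.

```python
import numpy as np

def movement_energy(keypoints):
    """Mean frame-to-frame joint displacement over a clip.

    A crude stand-in for an LMA Effort-style feature.
    keypoints: array of shape (T, J, 2) giving 2D positions of
    J skeleton joints across T video frames.
    """
    diffs = np.diff(keypoints, axis=0)       # per-frame displacement, (T-1, J, 2)
    speeds = np.linalg.norm(diffs, axis=-1)  # per-joint speed, (T-1, J)
    return float(speeds.mean())

# Synthetic clips: a low-arousal "still" clip vs. a high-arousal
# "vigorous" clip, modeled as random walks with different step sizes.
rng = np.random.default_rng(0)
still = np.cumsum(rng.normal(0.0, 0.1, size=(30, 17, 2)), axis=0)
vigorous = np.cumsum(rng.normal(0.0, 2.0, size=(30, 17, 2)), axis=0)

# The more energetic clip should score higher on this feature.
assert movement_energy(vigorous) > movement_energy(still)
```

In a pipeline like the one the abstract describes, such per-clip features would be correlated with, or regressed onto, crowdsourced arousal annotations; this sketch only shows that a simple movement statistic is computable from skeleton input.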
It has long been understood that culture shapes individuals' behavior, but how this is accomplished in the human brain has remained largely unknown. To examine this, we made use of a well-established cross-cultural difference in behavior: American culture tends to reinforce dominant behavior whereas, conversely, Japanese culture tends to reinforce subordinate behavior. In 17 Americans and 17 Japanese individuals, we assessed behavioral tendencies towards dominance versus subordination and measured neural responses using fMRI during the passive viewing of stimuli related to dominance and subordination. In Americans, dominant stimuli selectively engaged the caudate nucleus, bilaterally, and the medial prefrontal cortex (mPFC), whereas these were selectively engaged by subordinate stimuli in Japanese. Correspondingly, Americans self-reported a tendency towards more dominant behavior whereas Japanese self-reported a tendency towards more subordinate behavior. Moreover, activity in the right caudate and mPFC correlated with behavioral tendencies towards dominance versus subordination, such that stronger responses in the caudate and mPFC to dominant stimuli were associated with more dominant behavior and stronger responses in the caudate and mPFC to subordinate stimuli were associated with more subordinate behavior. The findings provide a first demonstration that culture can flexibly shape functional activity in the mesolimbic reward system, which in turn may guide behavior.
Research has largely neglected the effects of gaze direction cues on the perception of facial expressions of emotion. It was hypothesized that when gaze direction matches the underlying behavioral intent (approach-avoidance) communicated by an emotional expression, the perception of that emotion would be enhanced (i.e., shared signal hypothesis). Specifically, the authors expected that (a) direct gaze would enhance the perception of approach-oriented emotions (anger and joy) and (b) averted eye gaze would enhance the perception of avoidance-oriented emotions (fear and sadness). Three studies supported this hypothesis. Study 1 examined emotional trait attributions made to neutral faces. Study 2 examined ratings of ambiguous facial blends of anger and fear. Study 3 examined the influence of gaze on the perception of highly prototypical expressions.
For clear and unambiguous social categories, person perception occurs quite accurately from minimal cues. This article addresses the perception of an ambiguous social category (male sexual orientation) from minimal cues. Across 5 studies, the authors examined individuals' actual and self-assessed accuracy when judging male sexual orientation from faces and facial features. Although participants were able to make accurate judgments from multiple facial features (i.e., hair, the eyes, and the mouth area), their perceived accuracy was calibrated with their actual accuracy only when making judgments based on hairstyle, a controllable feature. These findings provide evidence that suggests different processes for extracting social category information during perception: explicit judgments based on obvious cues (hairstyle) and intuitive judgments based on nonobvious cues (information from the eyes and mouth area). Differences in the accuracy of judgments based on targets' controllability and perceivers' awareness of cues provide insight into the processes underlying intuitive predictions and intuitive judgments.
Abstract
Face ensemble coding is the perceptual ability to create a quick and overall impression of a group of faces, triggering social and behavioral motivations towards other people (approaching friendly people or avoiding an angry mob). Cultural differences in this ability have been reported, such that Easterners are better at face ensemble coding than Westerners are. The underlying mechanism has been attributed to differences in processing styles, with Easterners allocating attention globally, and Westerners focusing on local parts. However, the remaining question is how such default attention mode is influenced by salient information during ensemble perception. We created visual displays that resembled a real-world social setting in which one individual in a crowd of different faces drew the viewer's attention while the viewer judged the overall emotion of the crowd. In each trial, one face in the crowd was highlighted by a salient cue, capturing spatial attention before the participants viewed the entire group. American participants' judgment of group emotion weighed the attended individual face more strongly than Korean participants' did, suggesting a greater influence of local information on global perception. Our results showed that different attentional modes between cultural groups modulate social-emotional processing underlying people's perceptions and attributions.