The human brain shows distinct lateralized activation patterns for a range of cognitive processes. One such function, which is thought to be lateralized to the right hemisphere (RH), is human face processing. Its importance for social communication and interaction has led to a plethora of studies investigating face processing in health and disease. Temporally highly resolved methods, like event-related potentials (ERPs), allow for a detailed characterization of different processing stages and their specific lateralization patterns. This systematic review aimed at disentangling some of the contradictory findings regarding RH specialization in face processing, focusing on ERP research in healthy participants. Two databases were searched for studies that investigated left and right electrodes while participants viewed (mostly neutral) facial stimuli. The included studies used a variety of tasks, ranging from passive viewing to memorizing faces. The final data selection highlights that the strongest lateralization to the RH was found for the N170, especially for right-handed young male participants. Left-handed, female, and older participants showed less consistent lateralization patterns. Other ERP components, like the P1, P2, N2, P3, and the N400, were overall less clearly lateralized. The current review highlights that many of the assumed lateralization patterns are less clear than previously thought and that the variety of stimuli, tasks, and EEG setups used might contribute to the ambiguous findings.
The COVID-19 pandemic has introduced new challenges for governments and individuals. Unprecedented efforts at reducing virus transmission launched a novel arena for human face recognition in which faces are partially occluded with masks. Previous studies have shown that masks decrease the accuracy of face identity and emotion recognition. The current study focuses on the impact of masks on the speed of processing of these and other important social dimensions. Here we provide a systematic assessment of the impact of COVID-19 masks on the perception of facial identity, emotion, gender, and age. Four experiments (N = 116) were conducted in which participants categorized faces on a predefined dimension (e.g., emotion). Both speed and accuracy were measured. The results revealed that masks hindered the perception of virtually all tested facial dimensions (i.e., emotion, gender, age, and identity), interfering with the normal speed and accuracy of categorization. We also found that the adverse effects of masks were not due to holistic processes, because the Face Inversion Effect (FIE) was generally not larger for unmasked compared with masked faces. Moreover, we found that the impact of masks is not automatic and that in some contexts observers can control at least part of their detrimental effects.
Making new acquaintances requires learning to recognise previously unfamiliar faces. In the current study, we investigated this process by staging real-world social interactions between actors and the participants. Participants completed a face-matching behavioural task in which they matched photographs of the actors (whom they had yet to meet), or faces similar to the actors (henceforth called foils). Participants were then scanned using functional magnetic resonance imaging (fMRI) while viewing photographs of actors and foils. Immediately after exiting the scanner, participants met the actors for the first time and interacted with them for 10 min. On subsequent days, participants completed a second behavioural experiment and then a second fMRI scan. Prior to each session, actors again interacted with the participants for 10 min. Behavioural results showed that social interactions improved performance accuracy when matching actor photographs, but not foil photographs. The fMRI analysis revealed a difference in the neural response to actor photographs and foil photographs across all regions of interest (ROIs) only after social interactions had occurred. Our results demonstrate that short social interactions were sufficient to learn and discriminate previously unfamiliar individuals. Moreover, these learning effects were present in brain areas involved in face processing and memory.
Artificial intelligence (AI)-synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces.
A wealth of studies have shown that humans are remarkably poor at determining whether two face images show the same person or not (face matching). Given the prevalence of photo-ID, and the fact that people employed to check photo-ID are typically unfamiliar with the person pictured, there is a need to improve unfamiliar face matching accuracy. One method of improvement is to have participants complete the task in a pair, which results in subsequent improvements in the low performer (“the pairs training effect”). Here, we sought to replicate the original finding, to test the longevity of the pairs training effect, and to shed light on the potential underlying mechanisms. In two experiments, we replicated the pairs training effect and showed it is maintained after a delay (Experiment 1). We found no differences between high and low performers in confidence (Experiment 1) or response times (Experiment 2), and the content of the pairs’ discussions (Experiment 2) did not explain the results. The pairs training effect in unfamiliar face matching is robust, but the mechanisms underlying the effects remain as yet unexplained.
•We examined how children aged 4–11 form interpretations of ambiguous situations.
•Facial niceness influenced interpretations of ambiguous behavior by age 4.
•Facial niceness influenced interpretations of ambiguous intentions by age 6.
•Our findings suggest that facial biases impact children’s behavioral evaluations.
Children infer personality traits from faces when they are asked explicitly which face appears nice or mean. Less is known about how children use face–trait information implicitly to make behavioral evaluations. We used the Ambiguous Situations Protocol to explore how children use face–trait information to form interpretations of ambiguous situations when the behavior or intention of the target child was unclear. On each trial, children (N = 144, age range = 4–11.95 years; 74 girls, 67 boys, 3 gender not specified; 70% White, 10% other or mixed race, 5% Asian, 4% Black, 1% Indigenous, 9% not specified) viewed a child’s face (previously rated high or low in niceness) before seeing the child’s face embedded within an ambiguous scene (Scene Task) or hearing a vignette about a misbehavior done by that child (Misbehavior Task). Children described what was happening in each scene and indicated whether each misbehavior was done on purpose or by accident. Children also rated the behavior of each child and indicated whether the child would be a good friend. Facial niceness influenced children’s interpretations of ambiguous behavior (Scene Task) by 4 years of age, and ambiguous intentions (Misbehavior Task) by 6 years. Our results suggest that the use of face–trait cues to form interpretations of ambiguous behavior emerges early in childhood, a bias that may lead to differential treatment for peers perceived with a high-nice face versus a low-nice face.
Trait inferences from faces are pervasive, but sometimes misleading. Past research indicates Americans infer hunting and gathering ability from others' faces, but the accuracy of these perceptions remains unknown. In three studies, we test whether Americans can accurately perceive foraging ability from faces. We used three datasets from two traditional subsistence societies (the Hadza and the Tsimane) in which individuals were photographed and evaluated by their peers on their ability to hunt or gather effectively (N = 175). US MTurkers (N = 579) then evaluated the photos for foraging ability. We found that MTurkers' perceptions of men consistently tracked peer-evaluated hunting ability (overall r = 0.25), suggesting that naïve perceptions of men's productivity from a face photo alone reflect actual hunting ability. MTurkers' perceptions of women's productivity inversely correlated with their peer-evaluated gathering ability, however. We discuss potential mechanisms and implications for research on social perception.
The classical core system of face perception consists of the occipital face area (OFA), fusiform face area (FFA), and posterior superior temporal sulcus (STS). The functional interaction within this network, more specifically the effective connectivity, was first described by Fairhall and Ishai (2007) using functional magnetic resonance imaging and dynamic causal modeling. They proposed that the core system is hierarchically organized; information is processed in a parallel and predominantly feed-forward fashion from the OFA to downstream regions such as the FFA and STS, with no lateral connectivity, i.e., no connectivity between the two downstream regions (FFA and STS). Over a decade later, we conducted a conceptual replication of their model using four different functional magnetic resonance imaging data sets. The effective connectivity within the core system was assessed with contemporary versions of dynamic causal modeling.
The resulting model of the core system of face perception was densely interconnected. Using hierarchical linear modeling, we identified several significant forward, backward, and lateral connections in the core system of face perception across the data sets. Face perception increased the forward connectivity from the OFA to the FFA and from the OFA to the STS, and increased the inhibitory backward connectivity from the FFA to the OFA, as well as the lateral connectivity between the FFA and STS. Emotion perception increased forward connectivity between the OFA and STS and decreased the lateral connectivity between the FFA and STS. Face familiarity did not significantly alter these connections.
Our results revise the 2007 model of the core system of face perception. We discuss the potential meaning of the resulting model parameters and propose that our revised model is a suitable working model for further studies assessing the functional interaction within the core system of face perception. Our work further emphasizes the general importance of conceptual replications.
•We revised an early connectivity model of face perception using multiple data sets.
•Connectivity estimates were highly similar across different data sets.
•The core system of face perception is highly interconnected.
•Pupil response and judgments about faces/houses measured in neurotypical subjects.
•Pupil responses were larger for familiar faces relative to unfamiliar faces.
•Strong interocular suppression blocked overt familiar face and house recognition.
•Neither face/house discrimination nor pupil familiarity response were affected.
•These results provide a model for covert face recognition in prosopagnosia.
In prosopagnosia, brain lesions impair overt face recognition, but not face detection, and may coexist with residual covert recognition of familiar faces. Previous studies that simulated covert recognition in healthy individuals have impaired face detection as well as recognition, thus not fully mirroring the deficits in prosopagnosia. We evaluated a model of covert recognition based on continuous flash suppression (CFS). Familiar and unfamiliar faces and houses were masked while participants performed two discrimination tasks. With increased suppression, face/house discrimination remained largely intact, but face familiarity discrimination deteriorated. Covert recognition was present across all masking levels, evinced by higher pupil dilation to familiar than unfamiliar faces. Pupil dilation was uncorrelated with overt performance across subjects. Thus, CFS can impede overt face recognition without disrupting covert recognition and face detection, mirroring critical features of prosopagnosia. CFS could be used to uncover shared neural mechanisms of covert recognition in prosopagnosic patients and neurotypicals.
Are We Face Experts? Young, Andrew W.; Burton, A. Mike
Trends in Cognitive Sciences, February 2018, Volume 22, Issue 2
Journal Article
Peer-reviewed
Open access
According to a widely used theoretical perspective, our everyday experiences lead us to become natural experts at perceiving and recognising human faces. However, there has been considerable debate about this view. We discuss criteria for expertise and show how the debate over face expertise has often missed key points concerning the role and nature of face familiarity. For identity recognition, most of us show only limited expertise with unfamiliar faces. Carefully evaluating the senses in which it is appropriate or inappropriate to assert that we are face experts leads to the conclusion that we are, in effect, familiar face experts.
There is a wide range of ability to recognise the identities of unfamiliar faces in the population.
This variability is influenced by genes but does not seem to be amenable to training. Even having a job that requires matching unfamiliar faces does not lead to much improvement.
Images of faces seen in everyday life are highly variable, and much of the problem many of us experience with unfamiliar face identity comes from not being able to determine whether the variability is image-related or identity-related.
Although failures of familiar face recognition do sometimes occur, our ability to recognise familiar faces is mostly excellent, is able to cope with degraded images, and is largely unaffected by image-related differences.
Familiar face recognition meets broad criteria for expertise, but recognition of unfamiliar faces is expert only in the restricted sense that it is influenced by experience.