Importantly, both the neural and behavioural maturation that occur during adolescence are thought to be shaped by individuals' unique experiences with their social environment (Nelson, 2017; … Tottenham, 2014). … adolescence is often considered a sensitive period for the development of social cognitive functions (Blakemore & Mills, 2014; Crone & Dahl, 2012). Emotion recognition, or the ability to recognize others' emotions based on nonverbal cues (e.g., facial expressions, gestures and postures, tone of voice), is essential to social competence (Halberstadt, Denham, & Dunsmore, 2001). … age-related increases in white matter and in neural activity in face-processing areas of the brain have been associated with greater accuracy in recognizing facial expressions of emotion (facial ER) in 7- to 37-year-olds (Cohen Kadosh et al., 2012). The prevailing brain model for the extraction of emotional content from prosodic cues involves integration, within the superior temporal sulcus and gyrus (STS, STG), of information from the primary auditory cortex in Heschl's gyrus (A1) and the "temporal voice area" (TVA; Belin, Zatorre, & Ahad, 2002; Belin et al., 2000; Ethofer et al., 2006b; Ethofer et al., 2012; Wiethoff et al., 2008) with input from subcortical structures such as the amygdala and striatum (Bach et al., 2008; Ethofer et al., 2009b).
Studies of cognitive, perceptual, and socio-emotional development in infancy have made extensive use of looking time as an outcome measure. These procedures typically rely on assessing infant looking; investigators have primarily focused on mean looking times for groups of infants. This practice, however, obscures information about the individual looks of individual infants. This project addressed that gap by testing the temporal dependency hypothesis: the duration of an infant's successive looks at a target is positively predicted by the duration of the infant's previous looks at that target. Temporal dependency was found in the Face-to-Face/Still-Face procedure at 6 months (n = 109); the durations of successive looks at the parent were predicted by the durations of previous looks at the parent. Each individual infant's level of temporal dependency predicted joint attention on the Early Social Communication Scales (ESCS) at 9 months, but did not predict joint attention on the ESCS at 6 and 12 months, language on the Mullen Scales of Early Learning at 12, 24, or 36 months, or temperament assessed with the Infant Behavior Questionnaire at 12 months. Temporal dependency was also found in an infant-controlled habituation procedure at 6 months (n = 92); the durations of successive looks at a recorded face were predicted by the durations of previous looks at that face. In both contexts, individual infant looks were predictable; past behavior constrained current behavior. Non-random variation due to temporal dependency is an under-appreciated influence on looking behavior in both interactive and non-interactive contexts.
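The temporal dependency hypothesis amounts to a lag-1 relation between successive look durations within an infant. The sketch below illustrates one way such a relation could be estimated for a single infant; the log transform, function name, and example durations are illustrative assumptions rather than the analysis pipeline used in the project.

```python
# Minimal sketch: lag-1 temporal dependency in one infant's look durations.
# The log transform and example values are assumptions for illustration only.
import numpy as np
from scipy import stats

def temporal_dependency(look_durations):
    """Regress each look's (log) duration on the (log) duration of the previous look."""
    d = np.log(np.asarray(look_durations, dtype=float))  # look durations are typically skewed
    prev, curr = d[:-1], d[1:]                           # pair each look with the one before it
    result = stats.linregress(prev, curr)
    return result.slope, result.rvalue, result.pvalue

# Hypothetical successive looks toward the parent, in seconds
looks = [2.1, 3.4, 2.8, 5.0, 4.2, 1.9, 2.5, 3.1]
slope, r, p = temporal_dependency(looks)
print(f"lag-1 slope = {slope:.2f}, r = {r:.2f}, p = {p:.3f}")
```

A positive slope across an infant's looks would be consistent with the temporal dependency hypothesis; aggregating such estimates across infants would then support the individual-differences analyses described above.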
We investigated the dynamics of head motion in parents and infants during an age-appropriate, well-validated emotion induction, the Face-to-Face/Still-Face procedure. Participants were 12 ethnically diverse 6-month-old infants and their mother or father. During infant gaze toward the parent, infant angular amplitude and velocity of pitch and yaw decreased from the face-to-face (FF) to the still-face (SF) episode and remained lower in the following reunion (RE). During infant gaze away from the parent, angular velocity of pitch decreased from FF to SF and remained lower in the RE. Windowed cross-correlation suggested strong bidirectional effects with frequent shifts in the direction of influence. The number of significant positive and negative peaks was higher during FF than RE. Gaze toward and away from the parent was modestly predicted by head orientation. Together, these findings suggest that head motion is strongly related to age-appropriate emotion challenge, that perturbations of normal responsiveness carry over even after the parent resumes normal responsiveness in the reunion, and that the direction of influence in the postural domain shifts frequently.
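Windowed cross-correlation over a range of lags is the core technique referenced above. The sketch below shows a bare-bones version applied to two head-motion series (e.g., infant and parent pitch velocity); the window, step, and lag sizes, sampling rate, and synthetic data are illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch: windowed cross-correlation between two head-motion time series.
# Window/step/lag settings are assumptions for illustration only.
import numpy as np

def windowed_xcorr(x, y, win=90, step=15, max_lag=30):
    """Pearson correlation of x and y in sliding windows, at lags -max_lag..+max_lag.

    Returns an array of shape (n_windows, 2*max_lag + 1); peaks at positive vs.
    negative lags suggest different directions of influence.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = min(len(x), len(y))
    rows = []
    # start windows so that start + lag stays in bounds for every lag
    for start in range(max_lag, n - win - max_lag, step):
        a = x[start:start + win]
        rows.append([np.corrcoef(a, y[start + lag:start + lag + win])[0, 1]
                     for lag in range(-max_lag, max_lag + 1)])
    return np.array(rows)

# Hypothetical data: 60 s of motion sampled at 30 Hz, parent loosely following infant
rng = np.random.default_rng(0)
infant = rng.standard_normal(1800)
parent = 0.6 * np.roll(infant, 5) + rng.standard_normal(1800)
print(windowed_xcorr(infant, parent).shape)  # (n_windows, 2*max_lag + 1)
```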
To model the dynamics of social interaction, it is necessary both to detect specific Action Units (AUs) and to measure variation in their intensity and coordination over time. An automated method that performs well when detecting occurrence may or may not perform well for intensity measurement. We compared two dimensionality reduction approaches - Principal Components Analysis with Large Margin Nearest Neighbor (PCA+LMNN) and Laplacian Eigenmap - and two classifiers, SVM and K-Nearest Neighbor. Twelve infants were video-recorded during face-to-face interactions with their mothers. AUs related to positive and negative affect were manually coded from the video by certified FACS coders. Facial features were tracked using Active Appearance Models (AAM) and registered to a canonical view before extracting Histogram of Oriented Gradients (HOG) features. All possible combinations of dimensionality reduction approaches and classifiers were tested using leave-one-subject-out cross-validation. For consistency of intensity measurement (i.e., reliability as measured by ICC), the combination of PCA+LMNN and SVM classifiers gave the best results.
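The evaluation protocol described above combines leave-one-subject-out cross-validation with ICC as the consistency criterion. The sketch below shows that scaffolding with a PCA+SVM pipeline in scikit-learn; the LMNN metric-learning step is omitted (it is available in separate packages such as metric-learn), and the number of PCA components is an illustrative assumption.

```python
# Minimal sketch: leave-one-subject-out cross-validation scored with a consistency ICC.
# PCA+SVM stand in for the full PCA+LMNN+SVM pipeline; settings are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def icc_consistency(a, b):
    """Two-way mixed, single-measures, consistency ICC for two 'raters' (manual vs. predicted)."""
    x = np.column_stack([a, b]).astype(float)
    n, k = x.shape
    row_means, col_means, grand = x.mean(axis=1), x.mean(axis=0), x.mean()
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_err = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def loso_icc(X, y, subject_ids):
    """Predict AU intensity for each held-out infant and score consistency with manual codes."""
    pipeline = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="linear"))
    predictions = np.empty_like(y)
    for train, test in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        pipeline.fit(X[train], y[train])
        predictions[test] = pipeline.predict(X[test])
    return icc_consistency(y, predictions)
```

Holding out every frame from one infant at a time, as here, guards against the optimistic bias that subject-specific appearance would otherwise introduce.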
Intensity measurements of infant facial expressions are central to understanding emotion-mediated interactions and emotional development. We evaluated alternative image representations for automatic measurement of the intensity of spontaneous facial Action Units (AUs) related to infant emotion expression. Twelve infants were video-recorded during face-to-face interactions with their mothers. Facial features were tracked using active appearance models (AAMs) and registered to a canonical view. Three feature representations were compared: shape and grey-scale texture, Histogram of Oriented Gradients (HOG), and Local Binary Pattern Histograms (LBPH). To reduce the high dimensionality of the appearance features (grey-scale texture, HOG, and LBPH), a non-linear algorithm (Laplacian Eigenmaps) was used. For each representation, support vector machine classifiers were used to learn six gradations of AU intensity (0 to maximal). The target AUs were those central to positive and negative infant emotion. Shape plus grey-scale texture performed best for AUs that involve non-rigid deformations of permanent facial features (e.g., AU 12 and AU 20). These findings suggest that AU intensity detection may be maximized by choosing the feature representation best suited to each specific AU.
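To make the appearance representations concrete, the sketch below extracts HOG and grid-wise LBP histogram (LBPH) features from an already registered grey-scale face image and embeds them with Laplacian Eigenmaps (SpectralEmbedding in scikit-learn). Cell sizes, LBP parameters, grid layout, and embedding dimensionality are illustrative assumptions, not the settings used in the study.

```python
# Minimal sketch: HOG and LBPH features from a registered face, then Laplacian Eigenmaps.
# All parameter values below are assumptions for illustration only.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.manifold import SpectralEmbedding

def hog_features(face):
    """HOG descriptor of an AAM-registered grey-scale face image."""
    return hog(face, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def lbph_features(face, p=8, r=1, grid=(8, 8)):
    """Concatenated LBP histograms over a grid of face patches."""
    lbp = local_binary_pattern(face, p, r, method="uniform")
    h, w = lbp.shape
    bins = p + 2  # number of 'uniform' LBP codes
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(patch, bins=bins, range=(0, bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

def embed(feature_matrix, n_dims=20):
    """Non-linear dimensionality reduction (Laplacian Eigenmaps) before the SVM stage."""
    return SpectralEmbedding(n_components=n_dims, n_neighbors=10).fit_transform(feature_matrix)
```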
Discrete emotion theories emphasize the modularity of facial expressions, while functionalist theories suggest that a single facial action may carry a common meaning across expressions. Smiles involving the Duchenne marker, the eye constriction that produces crow's feet, are perceived as intensely positive and sincere. To test whether the Duchenne marker is a general index of intensity and sincerity, we contrasted positive and negative expressions with and without the Duchenne marker in a binocular rivalry paradigm. Both smiles and sad expressions involving the Duchenne marker were perceptually dominant for longer than non-Duchenne expressions, and participants rated all Duchenne expressions as more affectively intense and more sincere than their non-Duchenne counterparts. Correlations between perceptual dominance and ratings suggested that the Duchenne marker increased the dominance of smiles and sad expressions by increasing their perceived affective intensity. The results provide evidence in favor of Darwin's hypothesis that specific facial actions serve a general function (conveying affect intensification and sincerity) across expressions.
Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains, which together account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
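The benchmark law itself is not spelled out in this summary; a common form in this line of work is a progress-curve relation in which the interval before a perpetrator's nth event scales as tau_n = tau_1 * n^(-b), with b capturing the rate of escalation. The sketch below fits that form by least squares in log-log space; the functional form and the example intervals are assumptions for illustration, not results from the paper.

```python
# Minimal sketch: fit a progress-curve law tau_n = tau_1 * n^(-b) to inter-event intervals.
# The functional form and the example data are assumptions for illustration only.
import numpy as np

def fit_progress_curve(intervals):
    """Estimate (tau_1, b) from one actor's sequence of inter-event intervals."""
    tau = np.asarray(intervals, dtype=float)
    n = np.arange(1, len(tau) + 1)
    # log(tau_n) = log(tau_1) - b * log(n): ordinary least squares on the logs
    slope, log_tau1 = np.polyfit(np.log(n), np.log(tau), 1)
    return np.exp(log_tau1), -slope

# Hypothetical inter-event times (days) for one actor whose attacks accelerate
intervals = [100, 60, 45, 38, 33, 30, 27]
tau1, b = fit_progress_curve(intervals)
print(f"tau_1 = {tau1:.1f} days, b = {b:.2f}  (b > 0 indicates escalation)")
```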