Perceptual awareness in infants during the first year of life is understudied, despite the philosophical, scientific, and clinical importance of understanding how and when consciousness emerges during human brain development. Although parents are undoubtedly convinced that their infant is conscious, the lack of adequate experimental paradigms for addressing this question in preverbal infants has hindered research on this topic. However, recent behavioral and brain imaging studies have shown that infants engage in complex learning from an early age and that their brains are more structured than traditionally thought. I will present a brief overview of these results, which may provide indirect evidence of early perceptual awareness, and then describe how a more systematic approach to this question could be framed within global workspace theory, which identifies specific signatures of conscious perception in adults. Taking these brain signatures as a benchmark for conscious perception, we can infer that it exists in the second half of the first year, whereas the evidence before the age of 5 months is less solid, mainly because of the paucity of studies. The question of conscious perception before term remains open, with the possibility of short periods of conscious perception, which would facilitate early learning. Advances in brain imaging and growing interest in this subject should enable us to better understand this important issue in the years to come.
Understanding social interaction requires processing social agents and their relationships. Recent results show that much of this processing is accomplished visually: visual areas can represent multiple people, encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect has highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured brain activity of healthy female and male humans (N = 42) in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and relations between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in visual cortex. Additional analyses suggest the existence of a third network encoding relations between (non-social) objects. Finally, a separate occipitotemporal network showed generalization of relational information across body, face, and non-social object dyads (multivariate pattern classification analysis; see the sketch below), revealing shared properties of relations across categories. In sum, beyond single entities, visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding relational information may reveal the processing of emergent properties of social (and non-social) interactions that trigger inferential processes.
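As an illustration of the generalization analysis, the sketch below trains a linear classifier to discriminate facing from non-facing dyads on voxel patterns from one category (bodies) and tests it on another (faces). All array names, sizes, and the classifier choice are placeholder assumptions, not the authors' actual pipeline; with the random data used here, accuracy should hover around chance.

```python
# Hypothetical sketch of cross-category generalization of relational decoding:
# train on body dyads, test on face dyads. Data are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200                        # assumed sizes
X_bodies = rng.normal(size=(n_trials, n_voxels))    # voxel patterns, body dyads
X_faces = rng.normal(size=(n_trials, n_voxels))     # voxel patterns, face dyads
y = np.repeat([0, 1], n_trials // 2)                # 0 = non-facing, 1 = facing

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_bodies, y)                                # learn the relation code from bodies
acc = clf.score(X_faces, y)                         # test generalization to faces
print(f"cross-category decoding accuracy: {acc:.2f}")  # ~0.50 for random data
```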
The involvement of the sensorimotor system in the perception of painful actions has been repeatedly demonstrated. Yet the cognitive processes corresponding to sensorimotor activations have not been identified. In particular, the respective roles of higher-level and lower-level action representations, such as goals and grips, in the recognition of painful actions are not clear. Previous research has shown that in a neutral context, higher-level action representations (goals) are prioritized over lower-level action representations (grips) and guide action recognition. The present study evaluates to what extent the general priority given to goal-related information in the processing of visual actions can be modulated by a context of pain. We used the action violation paradigm developed by van Elk et al. (2008). In the action tasks, participants had to judge whether the grip or the goal of object-directed actions displayed in photographs was correct. The actress in the photograph showed either a neutral facial expression or a facial expression of pain. In the control task, participants had to judge whether the actress expressed pain. In the action tasks, goals influenced grip judgements more than grips influenced goal judgements overall, corroborating the previously reported priority given to goal-related information. Critically, the impact of irrelevant goal-related information on the identification of incorrect grips disappeared in the pain context. Moreover, judgements in the control task were similarly influenced by grip-related and goal-related information. These results suggest that a context of pain reduces the reliance on higher-level action representations for action judgements. The findings provide novel directions regarding the cognitive and brain mechanisms involved in action processing in painful situations and support pluralist views of action understanding.
Speech comprehension is enhanced when preceded (or accompanied) by a congruent rhythmic prime reflecting the metrical structure of the sentence. Although these phenomena have been described for auditory and motor primes separately, their respective and synergistic contributions have not been addressed. In this experiment, participants performed a speech comprehension task on degraded speech signals that were preceded by a rhythmic prime that could be auditory, motor, or audiomotor. Both auditory and audiomotor rhythmic primes speeded speech comprehension. While the presence of a purely motor prime (unpaced tapping) did not globally benefit speech comprehension, comprehension accuracy scaled with the regularity of motor tapping. To investigate inter-individual variability, participants also performed a Spontaneous Speech Synchronization test; one way to quantify the resulting perception-production coupling is sketched below. The strength of the estimated perception-production coupling correlated positively with overall speech comprehension scores. These findings are discussed in the framework of the dynamic attending and active sensing theories.
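For readers unfamiliar with how perception-production coupling can be quantified, the sketch below computes a phase-locking value (PLV) between a synthetic speech envelope and a produced rhythm. The signals, sampling rate, and the PLV metric itself are illustrative assumptions; the Spontaneous Speech Synchronization test may use a different estimator.

```python
# Illustrative phase-locking value (PLV) between a perceived speech envelope
# and a produced rhythm; both signals here are synthetic stand-ins.
import numpy as np
from scipy.signal import hilbert

fs = 100                                         # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)                     # 30 s of data
envelope = np.sin(2 * np.pi * 4.5 * t)           # ~4.5 Hz speech envelope (synthetic)
production = np.sin(2 * np.pi * 4.5 * t + 0.3)   # produced rhythm, constant phase lag

phase_env = np.angle(hilbert(envelope))          # instantaneous phase of each signal
phase_prod = np.angle(hilbert(production))
plv = np.abs(np.mean(np.exp(1j * (phase_env - phase_prod))))
print(f"PLV = {plv:.3f}")                        # 1 = perfect coupling, 0 = none
```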
Orofacial somatosensory inputs play an important role in speech motor control and speech learning. Since receiving specific auditory-somatosensory inputs during speech perceptual training alters speech perception, similar perceptual training could also alter speech production. We examined whether production performance was changed by perceptual training paired with orofacial somatosensory inputs.
We focused on the French vowels /e/ and /ø/, whose articulation contrasts in horizontal gestures. Perceptual training consisted of a vowel identification task contrasting /e/ and /ø/. For the first group of participants, somatosensory stimulation was applied along with the training, as facial skin stretch in the backward direction. We recorded the target vowels uttered by the participants before and after the perceptual training and compared their F1, F2, and F3 formants. We also tested a control group with no somatosensory stimulation, and another somatosensory group trained on a different vowel continuum (/e/-/i/).
Perceptual training with somatosensory stimulation induced changes in F2 and F3 of the produced vowel sounds. F2 decreased consistently in the two somatosensory groups. F3 increased following the /e/-/ø/ training and decreased following the /e/-/i/ training. The F2 change was significantly correlated with the perceptual shift between the first and second halves of the training phase in the somatosensory group with the /e/-/ø/ training, but not in the group with the /e/-/i/ training. The control group showed no effect on F2 or F3, and only a trend toward an F1 increase.
The results suggest that somatosensory inputs associated with speech sound inputs can play a role in speech training and learning, in both production and perception.
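As a minimal sketch of how the pre/post formant comparison could be run, the snippet below extracts F1-F3 at the vowel midpoint using parselmouth, a Python interface to Praat. The file names and the midpoint measurement choice are assumptions; the authors' exact extraction settings are not reported in the abstract.

```python
# Minimal pre/post formant measurement sketch using parselmouth (Praat).
# File names are hypothetical placeholders.
import parselmouth

def mid_vowel_formants(wav_path):
    snd = parselmouth.Sound(wav_path)
    formant = snd.to_formant_burg()      # Burg LPC formant tracking, default settings
    t_mid = snd.duration / 2             # measure at the vowel midpoint (assumed)
    return [formant.get_value_at_time(n, t_mid) for n in (1, 2, 3)]

for phase in ("pre", "post"):
    f1, f2, f3 = mid_vowel_formants(f"{phase}_training_vowel.wav")  # hypothetical files
    print(f"{phase}: F1={f1:.0f} Hz, F2={f2:.0f} Hz, F3={f3:.0f} Hz")
```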
The human action observation network (AON) encompasses brain areas consistently engaged when we observe others' actions. Although the core nodes of the AON are present from childhood, it is not known to what extent they are sensitive to different action features during development. Because social cognitive abilities continue to mature during adolescence, the AON response to socially oriented actions, but not to object-related actions, may differ between adolescents and adults. To test this hypothesis, we scanned with functional magnetic resonance imaging (fMRI) male and female typically developing teenagers (n = 28; 13 females) and adults (n = 25; 14 females) while they passively watched videos of manual actions varying along two dimensions: sociality (i.e., directed toward another person or not) and transitivity (i.e., involving an object or not). We found that action observation recruited the same fronto-parietal and occipito-temporal regions in adults and adolescents. The modulation of voxel-wise activity according to the social or transitive nature of the action was similar in both groups of participants. Multivariate pattern analysis, however, revealed that decoding accuracies in the intraparietal sulcus (IPS)/superior parietal lobule (SPL) for both sociality and transitivity were lower for adolescents than for adults. In addition, in the lateral occipitotemporal cortex (LOTC), generalization of decoding across the orthogonal dimension was lower for sociality only in adolescents. These findings indicate that the representation of the content of others' actions, and in particular their social dimension, in the adolescent AON is not yet as robust as in adults. SIGNIFICANCE STATEMENT The activity of the action observation network (AON) in the human brain is modulated according to the purpose of the observed action, in particular the extent to which it involves interaction with an object or with another person. How this conceptual representation of actions develops is largely unknown. Here, using multivoxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, we found that, while the action observation network is in place in adolescence, the fine-grained organization of its posterior regions is less robust than in adults for decoding the abstract social dimension of an action. This finding highlights the late maturation of social processing in the human brain.
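The ROI decoding analysis can be pictured schematically as below: a linear classifier is trained to predict the sociality of the observed action from voxel patterns, with leave-one-run-out cross-validation. The data are random placeholders, and the classifier and cross-validation scheme are assumptions rather than the authors' exact pipeline.

```python
# Schematic ROI decoding of action sociality with leave-one-run-out CV.
# Voxel patterns are random placeholders, not real fMRI data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(1)
n_runs, trials_per_run, n_voxels = 6, 16, 150
X = rng.normal(size=(n_runs * trials_per_run, n_voxels))      # e.g., IPS/SPL patterns
y = np.tile(np.repeat([0, 1], trials_per_run // 2), n_runs)   # 0 = nonsocial, 1 = social
runs = np.repeat(np.arange(n_runs), trials_per_run)           # run labels for CV folds

acc = cross_val_score(LinearSVC(), X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```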
Purpose: For voice perception, two voice cues, the fundamental frequency (fo) and the vocal tract length (VTL), seem to contribute largely to the identification of voices and speaker characteristics. Acoustic content related to these voice cues is altered in cochlear implant-transmitted speech, rendering voice perception difficult for the implant user. In everyday listening, there could be some facilitation from top-down compensatory mechanisms, such as the use of linguistic content. Recently, we have shown a lexical-content benefit on just-noticeable differences (JNDs) in VTL perception, which was not affected by vocoding. In this study, we investigated whether this benefit relates to lexicality or phonemic content, and whether additional sentence information can also affect voice cue perception. Method: This study examined the lexical benefit on VTL perception by comparing words, time-reversed words, and nonwords, to dissociate the contributions of lexical (words vs. nonwords) and phonetic (nonwords vs. reversed words) information. In addition, we investigated the effect of the amount of speech (auditory) information on fo and VTL voice cue perception by comparing words to sentences. In both experiments, nonvocoded and vocoded auditory stimuli were presented. Results: The outcomes replicated the detrimental effect of reversed words on VTL perception. Smaller JNDs were shown for stimuli containing lexical and/or phonemic information. Experiment 2 showed a benefit of processing full sentences compared to single words in both fo and VTL perception. In both experiments, there was an effect of vocoding, which interacted with sentence information only for fo. Conclusions: In addition to previous findings suggesting a lexical benefit, the current results show, more specifically, that lexical and phonemic information improves VTL perception. Both fo and VTL perception benefit from sentences compared to single words. These results indicate that cochlear implant users may be able to partially compensate for voice cue perception difficulties by relying on the linguistic content and rich acoustic cues of everyday speech. Supplemental Material: https://doi.org/10.23641/asha.23796405
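For context, JNDs of this kind are often estimated with an adaptive staircase. The toy simulation below implements a 2-down/1-up rule (converging near 70.7% correct) with an invented listener model; it illustrates the general logic only and is not the procedure reported in this study.

```python
# Toy 2-down/1-up adaptive staircase for estimating a JND.
# The listener model and step sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

def listener_correct(delta, jnd=2.0):
    # probability of a correct response grows with the voice-cue difference
    return rng.random() < 1 - 0.5 * np.exp(-delta / jnd)

delta, step = 8.0, 1.0        # starting difference and step size (arbitrary units)
reversals, correct_streak, direction = [], 0, -1

while len(reversals) < 8:
    if listener_correct(delta):
        correct_streak += 1
        if correct_streak == 2:              # 2 correct -> harder (smaller delta)
            correct_streak = 0
            if direction == +1:
                reversals.append(delta)      # direction change = reversal
            direction = -1
            delta = max(delta - step, 0.1)
    else:                                    # 1 wrong -> easier (larger delta)
        correct_streak = 0
        if direction == -1:
            reversals.append(delta)
        direction = +1
        delta += step

print(f"estimated JND ~ {np.mean(reversals[-6:]):.2f} (arbitrary units)")
```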
Cocaine induces many supranormal changes in neuronal activity in the brain, notably in learning- and reward-related regions, in comparison with nondrug rewards, a difference that is thought to contribute to its relatively high addictive potential. However, when facing a choice between cocaine and a nondrug reward (e.g., water sweetened with saccharin), most rats do not choose cocaine, as one would expect from the extent and magnitude of its global activation of the brain, but instead choose the nondrug option. We recently showed that cocaine, though larger in magnitude, is also an inherently more delayed reward than sweet water, thereby explaining why it has less value during choice and why rats opt for the more immediate nondrug option. Here, we used a large-scale Fos brain mapping approach to measure brain responses to each option in saccharin-preferring rats, with the aim of identifying brain regions whose activity may explain the preference for the nondrug option. In total, Fos expression was measured in 142 brain levels corresponding to 52 brain subregions and comprising 5 brain macrosystems. Overall, our findings confirm, in rats with a preference for saccharin, that cocaine induces more global brain activation than the preferred nondrug option does. Only a few brain regions were uniquely activated by saccharin. They included regions involved in taste processing (i.e., the anterior gustatory cortex) and regions involved in processing reward delay and intertemporal choice (i.e., some components of the septohippocampal system and its connections with the lateral habenula).