The ability to discriminate self- and non-self voice cues is a fundamental aspect of self-awareness and subserves self-monitoring during verbal communication. Nonetheless, the neurofunctional underpinnings of self-voice perception and recognition are still poorly understood. Moreover, how attention and stimulus complexity influence the processing and recognition of one's own voice remains to be clarified. Using an oddball task, the current study investigated how self-relevance and stimulus type interact during selective attention to voices, and how they affect the representation of regularity during voice perception.
Event-related potentials (ERPs) were recorded from 18 right-handed males. Pre-recorded self-generated (SGV) and non-self (NSV) voices, consisting of a nonverbal vocalization (vocalization condition) or disyllabic word (word condition), were presented as either standard or target stimuli in different experimental blocks.
The results showed increased N2 amplitude to SGV relative to NSV stimuli. Stimulus type modulated later processing stages only: P3 amplitude was increased for SGV relative to NSV words, whereas no differences between SGV and NSV were observed in the case of vocalizations. Moreover, SGV standards elicited reduced N1 and P2 amplitude relative to NSV standards.
These findings revealed that the self-voice grabs more attention when listeners are exposed to words but not vocalizations. Further, they indicate that detection of regularity in an auditory stream is facilitated for one's own voice at early processing stages. Together, they demonstrate that self-relevance affects attention to voices differently as a function of stimulus type.
•Effects of self-relevance on P3 amplitude to voices depend on stimulus complexity.
•N2 amplitude is affected by the self-relevance of voice stimuli.
•Extraction and categorization of acoustic properties is facilitated for the self-voice.
Previous studies looking at how Mind Wandering (MW) impacts performance in distinct Focused Attention (FA) systems, using the Attention Network Task (ANT), showed that the presence of pure MW thoughts did not impact overall ANT performance (alerting, orienting, and conflict effects). However, it remains unclear whether the lack of interference of MW in the ANT, reported at the behavioral level, has a neurophysiological correspondence. We hypothesize that distinct cortical processing may be required to meet attentional demands during MW. The objective of the present study was to test whether, given similar levels of ANT performance, individuals predominantly focusing on MW or FA show distinct cortical processing. Thirty-three healthy participants underwent high-density EEG acquisition while performing the ANT. MW was assessed following the ANT using an adapted version of the Resting State Questionnaire (ReSQ). The following ERPs were analyzed: pN1, pP1, P1, N1, pN, and P3. At the behavioral level, participants were slower and less accurate when responding to incongruent than to congruent targets (conflict effect), benefiting from the presentation of the double (alerting effect) and spatial (orienting effect) cues. Consistent with the behavioral data, ERP waveforms discriminated between the distinct attentional effects. However, these results held irrespective of the MW condition, suggesting that MW imposed no additional cortical demand in the alerting, orienting, and conflict attention tasks.
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages.
Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
We present a psycholinguistic study of a rare genetic microdeletion condition characterized by intellectual disability with relatively well-preserved language abilities, Smith-Magenis syndrome (SMS), for which no description of the cognitive and psycholinguistic profile in a Spanish population exists. The cognitive and psycholinguistic profile of SMS was characterized in a sample of 9 Spanish children aged 7 to 11 years, using the standardized Wechsler Intelligence Scale for Children-IV, the Illinois Test of Psycholinguistic Abilities, and the Peabody Picture Vocabulary Test. The results suggest a specific profile characterized by low IQ and relatively good abilities for integrating information from different channels; the observed difficulties center on attention problems and hyperactive behavior, which became evident during interaction in the assessment. This study provides the first description of the cognitive and psycholinguistic profile of patients with SMS in Spain and will help to establish differential educational and psychotherapeutic intervention guidelines relative to other low-incidence genetic conditions with similar profiles.
Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000–1400 msec), with earlier effects for laughs (700–1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.
•Self-generated voice elicited increased P3 amplitude relative to non-self voice stimuli.
•N2 amplitude was more negative for the self-generated relative to non-self voice.
•Modulation of P3 by voice identity was independent of voice acoustic properties.
Self-related stimuli—such as one’s own face or name—seem to be processed differently from non-self stimuli and to involve greater attentional resources, as indexed by larger amplitude of the P3 event-related potential (ERP) component. Nonetheless, the differential processing of self-related vs. non-self information using voice stimuli is still poorly understood. The present study investigated the electrophysiological correlates of processing self-generated vs. non-self voice stimuli, when they are in the focus of attention.
ERP data were recorded from twenty right-handed healthy males during an oddball task comprising pre-recorded self-generated (SGV) and non-self (NSV) voice stimuli. Both voices were used as standard and deviant stimuli in distinct experimental blocks. SGV was found to elicit more negative N2 and more positive P3 in comparison with NSV. No association was found between ERP data and voice acoustic properties.
These findings demonstrate an attentional bias to self-generated relative to non-self voice stimuli at both earlier and later processing stages. They suggest that one’s own voice representation may have a greater affective salience than an unfamiliar voice, confirming the modulatory role of salience on P3.
Auditory verbal hallucinations (AVH) are a core symptom of schizophrenia. Like "real" voices, AVH carry a rich amount of linguistic and paralinguistic cues that convey not only speech but also affect and identity information. Disturbed processing of voice identity, affective, and speech information has been reported in patients with schizophrenia. More recent evidence has suggested a link between voice-processing abnormalities and specific clinical symptoms of schizophrenia, especially AVH. It is still not well understood, however, to what extent these dimensions are impaired and how abnormalities in these processes might contribute to AVH. In this review, we consider behavioral, neuroimaging, and electrophysiological data to investigate the speech, identity, and affective dimensions of voice processing in schizophrenia, and we discuss how abnormalities in these processes might help to elucidate the mechanisms underlying specific phenomenological features of AVH. Schizophrenia patients exhibit behavioral and neural disturbances in all three dimensions of voice processing. Evidence suggesting a role of dysfunctional voice processing in AVH seems to be stronger for the identity and speech dimensions than for the affective domain.
The ability to differentiate one’s own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring processes and in communication. However, most of the existing studies have only focused on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing.
According to the author's narrative model of change, clients may maintain a problematic self-stability across therapy, leading to therapeutic failure, by a mutual in-feeding process, which involves a cyclical movement between two opposing parts of the self. During innovative moments (IMs) in the therapy dialogue, clients' dominant self-narrative is interrupted by exceptions to that self-narrative, but subsequently the dominant self-narrative returns. The authors identified return-to-the-problem markers (RPMs), which are empirical indicators of the mutual in-feeding process, in passages containing IMs in 10 cases of narrative therapy (five good-outcome cases and five poor-outcome cases) with females who were victims of intimate violence. The poor-outcome group had a significantly higher percentage of IMs with RPMs than the good-outcome group. The results suggest that therapeutic failures may reflect a systematic return to a dominant self-narrative after the emergence of novelties (IMs).