Quantity estimation can be represented in either an analog or a symbolic manner, and recent evidence suggests that analog and symbolic representations of quantities interact. Moreover, both representational forms may be enhanced by convergent multisensory information. Here, we elucidate these interactions using high-density electroencephalography (EEG) and an audiovisual oddball paradigm. Participants were presented with simultaneous audiovisual tokens in which the co-varying pitch of tones was combined with the cardinality of embedded dot patterns. Incongruencies were introduced independently in the symbolic and the non-symbolic component of the audiovisual percept, violating the newly acquired rule that "the higher the pitch of the tone, the larger the cardinality of the figure." The effect of neural plasticity on symbolic and non-symbolic numerical representations of quantities was investigated through a cross-sectional design comparing musicians to musically naïve controls. Each participant's cortical activity was reconstructed and statistically modeled for a predefined time window of the evoked response (130-170 ms). We show that symbolic and non-symbolic processing of magnitudes is reorganized in cortical space, with professional musicians showing altered activity in motor and temporal areas. Thus, we argue that the symbolic representation of quantities is altered through musical training.
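As a concrete illustration of the final analysis step, the sketch below compares mean source-level evoked amplitude in the 130-170 ms window between two groups. It is a minimal sketch under stated assumptions: source time courses are taken as already reconstructed, and the array shapes, group sizes, and uncorrected mass-univariate t-test are illustrative placeholders, not the study's actual statistical model.

```python
# Minimal sketch: compare mean source-level evoked amplitude in a
# predefined time window (130-170 ms) between musicians and controls.
# Shapes, group sizes, and the uncorrected t-test are illustrative.
import numpy as np
from scipy import stats

sfreq = 1000.0                                  # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / sfreq)         # epoch time axis (s)

# Hypothetical reconstructed source data: (n_subjects, n_sources, n_times)
rng = np.random.default_rng(0)
musicians = rng.standard_normal((20, 100, times.size))
controls = rng.standard_normal((20, 100, times.size))

# Average activity within the predefined window of interest
win = (times >= 0.130) & (times <= 0.170)
mus_win = musicians[..., win].mean(axis=-1)     # (n_subjects, n_sources)
con_win = controls[..., win].mean(axis=-1)

# Mass-univariate two-sample t-test across sources; a real analysis
# would correct for multiple comparisons (e.g., cluster permutation)
t_vals, p_vals = stats.ttest_ind(mus_win, con_win, axis=0)
print(f"max |t| across sources: {np.abs(t_vals).max():.2f}")
```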
• We recorded MEG while participants listened to an audiobook
• We developed a multivariate speech envelope tracking framework in source space
• We characterize spectral clusters across delays and cortical areas
• Speech-brain tracking mainly operates in δ and θ frequency channels
• Higher and lower association areas exhibit different coupling delays
The systematic alignment of low-frequency brain oscillations with the acoustic speech envelope is well established and has been proposed to be crucial for actively perceiving speech. Previous studies investigating speech-brain coupling in source space have been restricted to univariate pairwise approaches between brain and speech signals, and speech tracking information in frequency-specific communication channels might therefore be missed. To address this, we propose a novel multivariate framework for estimating speech-brain coupling in which neural variability from source-derived activity is taken into account along with the rate of change of the envelope's amplitude (its derivative). We applied it to magnetoencephalographic (MEG) recordings while human participants (male and female) listened to one hour of continuous naturalistic speech, showing that the multivariate approach outperforms the corresponding univariate method at low and high frequencies across frontal, motor, and temporal areas. Systematic comparisons revealed that the gain at low frequencies (0.6-0.8 Hz) was related to the envelope's rate of change, whereas at higher frequencies (0.8-10 Hz) it was mostly related to the increased neural variability from source-derived cortical areas. Furthermore, using a non-negative matrix factorization approach, we found distinct speech-brain components across time and cortical space related to speech processing. We confirm that speech envelope tracking operates mainly at two timescales (δ and θ frequency bands), and we extend those findings by showing shorter coupling delays in auditory-related components and longer delays in higher-association frontal and motor components, indicating temporal differences in speech tracking and providing implications for hierarchical stimulus-driven speech processing.
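One plausible realization of such a multivariate estimator is sketched below: canonical correlation analysis between several source time courses within one cortical area and the speech envelope plus its temporal derivative. This is an assumption-laden stand-in for the framework described above, not the paper's exact estimator, and all signals are simulated.

```python
# Illustrative multivariate envelope-tracking estimate: canonical
# correlation between several source time courses in one cortical
# area and the speech envelope plus its temporal derivative.
# All signals simulated; one plausible estimator, not the paper's.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt
from sklearn.cross_decomposition import CCA

sfreq, n_sec = 200.0, 60
t = np.arange(int(n_sec * sfreq)) / sfreq
rng = np.random.default_rng(1)

# Simulated broadband "speech", its envelope, and the rate of change
speech = rng.standard_normal(t.size)
envelope = np.abs(hilbert(speech))
env_deriv = np.gradient(envelope, 1 / sfreq)

# Simulated multichannel source activity (e.g., 10 dipoles in one area),
# with weak envelope tracking injected into the first channel
sources = rng.standard_normal((t.size, 10))
sources[:, 0] += 0.5 * envelope

# Band-limit everything to one frequency channel of interest (theta here)
sos = butter(4, [4, 7], btype="bandpass", fs=sfreq, output="sos")
X = sosfiltfilt(sos, sources, axis=0)
Y = sosfiltfilt(sos, np.column_stack([envelope, env_deriv]), axis=0)

# First canonical correlation = multivariate speech-brain coupling score
cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)
coupling = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"multivariate coupling (canonical r): {coupling:.3f}")
```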
Bodily rhythms such as respiration are increasingly acknowledged to modulate neural oscillations underlying human action, perception, and cognition. In contrast, the link between respiration and aperiodic brain activity - a non-oscillatory reflection of excitation-inhibition (E:I) balance - has remained unstudied. Aiming to disentangle potential respiration-related dynamics of periodic and aperiodic activity, we applied recently developed algorithms of time-resolved parameter estimation to resting-state MEG and EEG data from two labs (total N = 78 participants). We provide evidence that fluctuations of aperiodic brain activity (1/f slope) are phase-locked to the respiratory cycle, suggesting that spontaneous state shifts of excitation-inhibition balance are at least partly influenced by peripheral bodily signals. Moreover, differential temporal dynamics in the coupling of non-oscillatory and oscillatory activity to respiration raise the possibility of a functional distinction in how each component relates to respiration. Our findings highlight the role of respiration as a physiological influence on brain signalling.
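The sketch below illustrates the general logic of time-resolved aperiodic parameter estimation: fit the 1/f exponent in sliding windows with the specparam/FOOOF toolbox and bin the resulting exponents by respiratory phase. The window length, phase bins, and simulated signals are illustrative choices, not the study's pipeline.

```python
# Sketch: time-resolved 1/f (aperiodic) exponent via sliding-window
# specparam/FOOOF fits, binned by respiratory phase. Window length,
# bins, and signals are illustrative assumptions, not the pipeline.
import numpy as np
from scipy.signal import welch, hilbert
from fooof import FOOOF  # pip install fooof (a.k.a. specparam)

sfreq, n_sec = 250.0, 120
t = np.arange(int(n_sec * sfreq)) / sfreq
rng = np.random.default_rng(2)

meg = rng.standard_normal(t.size)             # stand-in M/EEG channel
respiration = np.sin(2 * np.pi * 0.25 * t)    # ~0.25 Hz breathing trace
resp_phase = np.angle(hilbert(respiration))   # instantaneous phase

# Fit the aperiodic exponent in overlapping 2 s windows
win, hop = int(2 * sfreq), int(0.5 * sfreq)
fm = FOOOF(max_n_peaks=4, verbose=False)
exponents, phases = [], []
for start in range(0, t.size - win, hop):
    freqs, psd = welch(meg[start:start + win], fs=sfreq, nperseg=win // 2)
    fm.fit(freqs, psd, freq_range=(1, 40))
    exponents.append(fm.aperiodic_params_[-1])   # [offset, exponent]
    phases.append(resp_phase[start + win // 2])  # phase at window center

# Bin exponents by respiratory phase to probe phase-locking
bins = np.linspace(-np.pi, np.pi, 9)
idx = np.digitize(phases, bins) - 1
means = [np.mean(np.array(exponents)[idx == k]) for k in range(8)]
print("mean 1/f exponent per respiratory phase bin:", np.round(means, 3))
```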
In conversational settings, seeing the speaker’s face elicits internal predictions about the upcoming acoustic utterance. Understanding how the listener’s cortical dynamics tune to the temporal statistics of audiovisual (AV) speech is thus essential. Using magnetoencephalography, we explored how large-scale frequency-specific dynamics of human brain activity adapt to AV speech delays. First, we show that the amplitude of phase-locked responses parametrically decreases with natural AV speech synchrony, a pattern that is consistent with predictive coding. Second, we show that the temporal statistics of AV speech affect large-scale oscillatory networks at multiple spatial and temporal resolutions. We demonstrate a spatial nestedness of oscillatory networks during the processing of AV speech: these oscillatory hierarchies are such that high-frequency activity (beta, gamma) is contingent on the phase response of low-frequency (delta, theta) networks. Our findings suggest that the endogenous temporal multiplexing of speech processing confers adaptability within the temporal regimes that are essential for speech comprehension.
• Brain activity is sensitive to audiovisual (AV) speech delays
• Auditory evoked responses track temporal AV speech delays
• Spatially synchronized nested networks track audiovisual speech asynchronies
• AV temporal statistics drive top-down information transfer to auditory cortex
Neuroscience; Sensory neuroscience; Signal processing
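A generic way to quantify the cross-frequency nesting described in this abstract (high-frequency amplitude contingent on low-frequency phase) is a Canolty-style mean vector length, sketched below on simulated single-channel data. It is a stand-in for, and not identical to, the paper's network-level analysis.

```python
# Generic phase-amplitude coupling (PAC) estimate via mean vector
# length: how strongly gamma amplitude is nested in theta phase.
# Simulated single-channel data; a stand-in for the network analysis.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

sfreq, n_sec = 500.0, 60
t = np.arange(int(n_sec * sfreq)) / sfreq
rng = np.random.default_rng(3)

# Simulate gamma bursts whose amplitude rides on theta phase
theta = np.sin(2 * np.pi * 5 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t)
signal = theta + 0.5 * gamma + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sfreq, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(signal, 4, 7)))   # theta phase
amp = np.abs(hilbert(bandpass(signal, 50, 70)))     # gamma amplitude

# Mean vector length: amplitude-weighted phase consistency (Canolty)
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))
print(f"phase-amplitude coupling (MVL): {mvl:.3f}")
```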
It has long been known that human breathing is altered during listening and speaking compared to rest: during speaking, inhalation depth is adjusted to the air volume required for the upcoming utterance, and during listening, the listener’s inhalation is temporally aligned with that of the speaker. While evidence for the former is relatively strong, it is virtually absent for the latter. We address both phenomena using recordings of speech envelope and respiration in 30 participants during 14 min of speaking and listening to one’s own speech. First, we show that inhalation depth is positively correlated with the total power of the speech envelope in the following utterance. Second, we provide evidence that inhalation during listening to one’s own speech is significantly more likely at time points of inhalation during speaking. These findings are compatible with models that postulate alignment of the internal forward models of interlocutors with the aim of facilitating communication.
• Human breathing is altered during listening and speaking compared to rest
• In speaking, inhalation correlates with speech envelope in the following utterance
• We find similar timing of inhalations during speaking and listening to one’s own speech
• Findings support hypothesized alignment of internal forward models of interlocutors
Behavioral neuroscience; Cognitive neuroscience
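The first finding above (inhalation depth predicting upcoming envelope power) can be illustrated with the simplified sketch below. Breath detection via peak picking and utterance segmentation between successive inhalation peaks are simplifying assumptions, and all signals are simulated.

```python
# Simplified sketch of the first analysis: correlate inhalation depth
# with total speech-envelope power in the following utterance. Breath
# detection and utterance segmentation are simplifying assumptions.
import numpy as np
from scipy.signal import find_peaks, hilbert
from scipy.stats import pearsonr

sfreq, n_sec = 100.0, 300
t = np.arange(int(n_sec * sfreq)) / sfreq
rng = np.random.default_rng(4)

respiration = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)
envelope = np.abs(hilbert(rng.standard_normal(t.size)))  # speech envelope

# Inhalation peaks taken to mark the start of each breath group
peaks, _ = find_peaks(respiration, distance=int(3 * sfreq))
troughs, _ = find_peaks(-respiration, distance=int(3 * sfreq))

depths, powers = [], []
for k, p in enumerate(peaks[:-1]):
    prior = troughs[troughs < p]
    if prior.size == 0:
        continue
    depths.append(respiration[p] - respiration[prior[-1]])  # inhalation depth
    powers.append(np.sum(envelope[p:peaks[k + 1]] ** 2))    # utterance power

r, pval = pearsonr(depths, powers)
print(f"inhalation depth vs following envelope power: r={r:.2f}, p={pval:.3f}")
```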
When we attentively listen to an individual’s speech, our brain activity dynamically aligns to the incoming acoustic input at multiple timescales. Although this systematic alignment between ongoing brain activity and speech in auditory brain areas is well established, the acoustic events that drive this phase-locking are not fully understood. Here, we use magnetoencephalographic recordings of 24 human participants (12 females) while they were listening to a 1 h story. We show that whereas speech-brain coupling is associated with sustained acoustic fluctuations in the speech envelope in the theta frequency range (4-7 Hz), speech tracking in the low-frequency delta band (below 1 Hz) was strongest around speech onsets, such as the beginning of a sentence. Crucially, delta tracking in bilateral auditory areas was not sustained after onsets, suggesting that delta tracking during continuous speech perception is driven by speech onsets. We conclude that onset and sustained components of speech contribute differentially to speech tracking in the delta and theta frequency bands, orchestrating the sampling of continuous speech. Thus, our results suggest a temporal dissociation of acoustically driven oscillatory activity in auditory areas during speech tracking, with implications for the orchestration of speech tracking at multiple timescales.
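A minimal sketch of the onset-versus-sustained contrast follows: compare delta-band phase locking between brain activity and the speech envelope in windows aligned to sentence onsets against later "sustained" windows. The phase-locking value used here is a generic coupling measure, and the onset times and signals are simulated placeholders.

```python
# Sketch of the onset vs. sustained contrast: delta-band phase locking
# between brain and speech envelope in windows at sentence onsets
# versus later windows. Onset times and signals are simulated.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

sfreq, n_sec = 200.0, 600
t = np.arange(int(n_sec * sfreq)) / sfreq
rng = np.random.default_rng(5)

envelope = np.abs(hilbert(rng.standard_normal(t.size)))
brain = 0.3 * envelope + rng.standard_normal(t.size)  # weak tracking
onsets = np.arange(5, n_sec - 20, 10)                 # sentence onsets (s)

def band_phase(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sfreq, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def plv(p1, p2):  # phase-locking value between two phase series
    return np.abs(np.mean(np.exp(1j * (p1 - p2))))

ph_brain = band_phase(brain, 0.5, 1.0)    # delta band
ph_env = band_phase(envelope, 0.5, 1.0)

win = int(2 * sfreq)                       # 2 s analysis windows
onset_plv, sustained_plv = [], []
for on in onsets:
    i = int(on * sfreq)
    onset_plv.append(plv(ph_brain[i:i + win], ph_env[i:i + win]))
    j = i + 3 * win                        # window well after the onset
    sustained_plv.append(plv(ph_brain[j:j + win], ph_env[j:j + win]))

print(f"delta PLV at onsets: {np.mean(onset_plv):.3f}, "
      f"sustained: {np.mean(sustained_plv):.3f}")
```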
Speech production and perception are fundamental processes of human cognition that both rely on intricate processing mechanisms that are still poorly understood. Here, we study these processes using magnetoencephalography (MEG) to comprehensively map the connectivity of regional brain activity, both within the brain and to the speech envelope, during continuous speaking and listening. Our results reveal not only a partly shared neural substrate for both processes but also a dissociation in space, delay, and frequency. Neural activity in motor and frontal areas is coupled to succeeding speech in the delta band (1 to 3 Hz), whereas theta-range coupling follows speech in temporal areas during speaking. Connectivity results showed a separation of bottom-up and top-down signalling in distinct frequency bands during speaking. Overall, we show that frequency-specific connectivity channels for bottom-up and top-down signalling support continuous speaking and listening. These findings further shed light on the complex interplay between the brain regions involved in speech production and perception.
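The delay dissociation reported above can be illustrated with a lagged coupling analysis: shift a band-limited brain signal against the speech envelope and find the lag of maximal correlation, with positive lags meaning brain activity leads the upcoming speech. The cross-correlation approach below is a simplified stand-in for the study's coherence and connectivity pipeline; all signals are simulated.

```python
# Sketch of a delay analysis: find the lag of maximal correlation
# between a band-limited brain signal and the speech envelope.
# Positive lags = brain leads speech (as for delta in motor/frontal
# areas during speaking). Simulated signals; not the study pipeline.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

sfreq, n_sec = 200.0, 300
t = np.arange(int(n_sec * sfreq)) / sfreq
rng = np.random.default_rng(6)

envelope = np.abs(hilbert(rng.standard_normal(t.size)))
lead = int(0.2 * sfreq)                    # brain leads speech by 200 ms
brain = np.roll(envelope, -lead) + rng.standard_normal(t.size)

def bandlimit(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sfreq, output="sos")
    return sosfiltfilt(sos, x)

b_delta = bandlimit(brain, 1, 3)           # delta-band brain signal
e_delta = bandlimit(envelope, 1, 3)        # delta-band envelope

max_lag = int(0.5 * sfreq)                 # search +/- 500 ms
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.corrcoef(np.roll(b_delta, k), e_delta)[0, 1] for k in lags]
best = lags[int(np.argmax(np.abs(xcorr)))] / sfreq
print(f"peak delta coupling at lag {best * 1000:+.0f} ms (positive = brain leads)")
```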