The article provides an overview of the main reports presented at the round table on neurolinguistic research issues in the field of language education, organized within the framework of the 5th Inter-University Scientific and Practical Conference “Traditions and Innovations in Foreign Language Teaching in Non-Linguistic Universities” (MGIMO University, Moscow, Russia).
Argument structure (AS) is the information that verbs carry about syntactic and thematic roles. The objective of this work is to investigate how the effect of AS complexity is manifested in a Sentence Elicitation task and a Grammaticality Judgment task. A group of 62 native speakers of Rioplatense Spanish of different ages and levels of education participated in the study. The stimuli were constructed according to the number of obligatory arguments that a verb requires (1, 2, or 3) and the canonicity of the syntactic-semantic mapping (intransitive unaccusative vs. unergative). The findings suggest that the greater the number of arguments, the greater the difficulties, especially in participants with low schooling. No differences were found between the intransitive verb types. Results are discussed in light of linguistic and neuropsycholinguistic hypotheses. The tasks from this study are part of a morphosyntax assessment battery for people with aphasia.
To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals. That is, equally important to processing phonetic features is the detection of acoustic cues that give structure and context to the information we hear. How the brain organizes this information is unknown. Using data-driven computational methods on high-density intracranial recordings from 27 human participants, we reveal the functional distinction of neural responses to speech in the posterior superior temporal gyrus according to either onset or sustained response profiles. Though similar response types have been observed throughout the auditory system, we found novel evidence for a major spatial parcellation in which a distinct caudal zone detects acoustic onsets and a rostral-surround zone shows sustained, relatively delayed responses to ongoing speech stimuli. While posterior onset and anterior sustained responses are used substantially during natural speech perception, they are not limited to speech stimuli and are seen even for reversed or spectrally rotated speech. Single-electrode encoding of phonetic features in each zone depended upon whether the sound occurred at sentence onset, suggesting joint encoding of phonetic features and their temporal context. Onset responses in the caudal zone could accurately decode sentence and phrase onset boundaries, providing a potentially important internal mechanism for detecting temporal landmarks in speech and other natural sounds. These findings suggest that onset and sustained responses not only define the basic spatial organization of high-order auditory cortex but also have direct implications for how speech information is parsed in the cortex.
• Human STG is divided into two regions with onset and sustained responses to speech
• Onset-selective regions are located posteriorly, and sustained regions are more anterior
• These response properties, not phonemes, are the main organizing feature of the STG
• Onset and sustained electrodes determine sentence start and identity in a decoder
Hamilton, Edwards, and Chang use a combination of unsupervised and supervised methods on high-density intracranial recordings to reveal a spatially localized region of the posterior superior temporal gyrus that specifically parses acoustic onsets and an anterior region that exhibits sustained responses to speech.
Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
In this brief commentary, we propose that an emotion word type that has not been elucidated in the review (Hinojosa, J. A., Moreno, E. M., & Ferré, P. (2019). Affective neurolinguistics: Towards a framework for reconciling language and emotion. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2019.1620957) should be incorporated in affective neurolinguistics. Emotion words, as a category contrasted with neutral words, are a mixture of two subtypes: emotion-label words (e.g. joy, sorrow) and emotion-laden words (e.g. reward, snake). Differences between the two kinds of words have been confirmed in numerous studies. The distinction between the two types of words has the potential to contribute to the definition and categorisation of emotion words and provides a new interface for affective neurolinguistics, affective neuroscience, and cognitive neuroscience.
Being able to speak and/or understand multiple languages is a ubiquitous human behavior. Over the past decades in particular, an increasing amount of research has investigated the acquisition, processing, and use of multiple languages, as well as how variation therein associates with differential cognitive performance and brain functions and structures (see Bialystok, 2016, Bialystok, 2017, De Houwer, 2021, Fricke et al., 2019, Grundy and Timmer, 2017, Kroll and Bialystok, 2013, Li and Dong, 2020, Sulpizio et al., 2020 for reviews). Taken together, this research strongly suggests that these behavioral and neural consequences reflect individual differences in how one adapts to one's environment through multilingualism. Paying homage to the reality of language diversity around the world, we have opted to use herein the term multilingualism, as opposed to simply bilingualism, given that linguistic experiences can, and often do, extend beyond managing only two languages on a daily basis. The present special issue presents a collection of 15 papers examining the linguistic, cognitive and neural consequences of multilingualism, using innovative approaches to characterize relevant experiences.
Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplemental Appendix.