Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The present availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly "embodied" in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within- versus between-category similarity metric showed much greater separation between categories than representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions. We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to large-scale brain networks and biologically plausible accounts of concept acquisition.
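As an illustration of the within- versus between-category similarity metric described above, the following is a minimal Python sketch (not the authors' code; the function name and toy data are hypothetical) computing mean within-category minus mean between-category cosine similarity over attribute vectors:

```python
import numpy as np

def category_separation(vectors, labels):
    """Mean within-category minus mean between-category cosine similarity.

    vectors: (n_words, n_attributes) array of attribute salience ratings
    labels:  length-n_words sequence of category labels
    """
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = unit @ unit.T                      # pairwise cosine similarities
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = sim[same & off_diag].mean()     # same category, excluding self-pairs
    between = sim[~same].mean()              # different categories
    return within - between

# Toy example: four "words" rated on three hypothetical attributes
vecs = np.array([[5.0, 1.0, 0.5],
                 [4.5, 0.8, 0.7],
                 [0.3, 4.8, 3.9],
                 [0.5, 5.1, 4.2]])
print(category_separation(vecs, ["animal", "animal", "plant", "plant"]))
```

A larger value indicates that attribute vectors for members of the same category are more similar to one another than to members of other categories.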
► We investigated the role of the motor system in action-related language processing.
► PD patients and controls performed lexical decision and semantic judgment tasks.
► The words were either action verbs or abstract verbs.
► PD patients were more impaired in action verb processing in both tasks.
► The motor system plays a causal role in action language semantics.
The problem of how word meaning is processed in the brain has been a topic of intense investigation in cognitive neuroscience. While considerable correlational evidence exists for the involvement of sensory-motor systems in conceptual processing, it is still unclear whether they play a causal role. We investigated this issue by comparing the performance of patients with Parkinson’s disease (PD) with that of age-matched controls when processing action and abstract verbs. To examine the effects of task demands, we used tasks in which semantic demands were either implicit (lexical decision and priming) or explicit (semantic similarity judgment). In both tasks, PD patients’ performance was selectively impaired for action verbs (relative to controls), indicating that the motor system plays a more central role in the processing of action verbs than in the processing of abstract verbs. These results argue for a causal role of sensory-motor systems in semantic processing.
The brain is thought to combine linguistic knowledge of words and nonlinguistic knowledge of their referents to encode sentence meaning. However, functional neuroimaging studies aiming at decoding language meaning from neural activity have mostly relied on distributional models of word semantics, which are based on patterns of word co-occurrence in text corpora. Here, we present initial evidence that modeling nonlinguistic "experiential" knowledge contributes to decoding neural representations of sentence meaning. We model attributes of people's sensory, motor, social, emotional, and cognitive experiences with words using behavioral ratings. We demonstrate that fMRI activation elicited in sentence reading is more accurately decoded when this experiential attribute model is integrated with a text-based model than when either model is applied in isolation (participants were 5 males and 9 females). Our decoding approach exploits a representation-similarity-based framework, which benefits from being parameter free, while performing at accuracy levels comparable with those from parameter-fitting approaches, such as ridge regression. We find that the text-based model contributes particularly to the decoding of sentences containing linguistically oriented "abstract" words and reveal tentative evidence that the experiential model improves decoding of more concrete sentences. Finally, we introduce a cross-participant decoding method to estimate an upper bound on model-based decoding accuracy. We demonstrate that a substantial fraction of neural signal remains unexplained, and leverage this gap to pinpoint characteristics of weakly decoded sentences and hence identify model weaknesses to guide future model development.
Language gives humans the unique ability to communicate about historical events, theoretical concepts, and fiction. Although words are learned through language and defined by their relations to other words in dictionaries, our understanding of word meaning presumably draws heavily on our nonlinguistic sensory, motor, interoceptive, and emotional experiences with words and their referents. Behavioral experiments lend support to the intuition that word meaning integrates aspects of linguistic and nonlinguistic "experiential" knowledge. However, behavioral measures do not provide a window on how meaning is represented in the brain and tend to necessitate artificial experimental paradigms. We present a model-based approach that reveals early evidence that experiential and linguistically acquired knowledge can be detected in brain activity elicited in reading natural sentences.
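The parameter-free, representation-similarity-based decoding framework mentioned in the abstract above can be illustrated with a pairwise decoding sketch of the following kind. This is a simplified, hypothetical implementation; the variable names and exact scoring rule are assumptions, not the authors' published procedure:

```python
import numpy as np

def pairwise_rsa_decode(neural, model, i, j):
    """Parameter-free pairwise decoding via representational similarity.

    neural: (n_items, n_voxels) activation patterns, one row per sentence
    model:  (n_items, n_features) model vectors (e.g., experiential ratings)
    i, j:   indices of the two held-out items
    Returns True if the correct item-to-pattern pairing scores higher.
    """
    train = [k for k in range(len(neural)) if k not in (i, j)]

    def sims(space, item):
        # similarity of one held-out item to every training item in a given space
        return np.array([np.corrcoef(space[item], space[k])[0, 1] for k in train])

    ni, nj = sims(neural, i), sims(neural, j)
    mi, mj = sims(model, i), sims(model, j)
    correct = np.corrcoef(ni, mi)[0, 1] + np.corrcoef(nj, mj)[0, 1]
    swapped = np.corrcoef(ni, mj)[0, 1] + np.corrcoef(nj, mi)[0, 1]
    return correct > swapped
```

Averaging the outcome over all held-out pairs yields a decoding accuracy. One simple way to integrate the experiential and text-based models in this scheme is to average or concatenate their similarity vectors before scoring; the published procedure may differ.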
The organization of semantic memory, including memory for word meanings, has long been a central question in cognitive science. Although there is general agreement that word meaning representations must make contact with sensory-motor and affective experiences in a non-arbitrary fashion, the nature of this relationship remains controversial. One prominent view proposes that word meanings are represented directly in terms of their experiential content (i.e., sensory-motor and affective representations). Opponents of this view argue that the representation of word meanings reflects primarily taxonomic structure, that is, their relationships to natural categories. In addition, the recent success of language models based on word co-occurrence (i.e., distributional) information in emulating human linguistic behavior has led to proposals that this kind of information may play an important role in the representation of lexical concepts. We used a semantic priming paradigm designed for representational similarity analysis (RSA) to quantitatively assess how well each of these theories explains the representational similarity pattern for a large set of words. Crucially, we used partial correlation RSA to account for intercorrelations between model predictions, which allowed us to assess, for the first time, the unique effect of each model. Semantic priming was driven primarily by experiential similarity between prime and target, with no evidence of an independent effect of distributional or taxonomic similarity. Furthermore, only the experiential models accounted for unique variance in priming after partialling out explicit similarity ratings. These results support experiential accounts of semantic representation and indicate that, despite their good performance at some linguistic tasks, the distributional models evaluated here do not encode the same kind of information used by the human semantic system.
• We used RSA to evaluate three major theories of word meaning representation.
• Automatic semantic priming was measured item-wise with high reliability.
• Results strongly support representation in terms of experiential information.
• Distributional information did not independently contribute to semantic priming.
• RSA and semantic priming can be used to determine the featural content of concepts.
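The partial correlation RSA described in the abstract above, in which one model's contribution is assessed after removing variance shared with competing models, can be sketched as follows. The function and variable names are hypothetical, and rank-based residualization is one of several possible implementations:

```python
import numpy as np
from scipy.stats import rankdata

def partial_rsa(target_rdm, model_rdm, control_rdms):
    """Rank-based partial correlation between a target dissimilarity vector
    (e.g., item-wise priming effects) and one model RDM, controlling for the
    remaining model RDMs. All inputs are condensed (upper-triangle) vectors.
    """
    def residualize(y, controls):
        # regress the ranked vector on ranked controls (plus intercept)
        X = np.column_stack([np.ones(len(y))] + [rankdata(c) for c in controls])
        beta, *_ = np.linalg.lstsq(X, rankdata(y), rcond=None)
        return rankdata(y) - X @ beta

    r_target = residualize(target_rdm, control_rdms)
    r_model = residualize(model_rdm, control_rdms)
    return np.corrcoef(r_target, r_model)[0, 1]
```

A non-zero partial correlation indicates that the target model explains similarity structure that the control models do not.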
The embodied cognition approach to the study of the mind proposes that higher order mental processes such as concept formation and language are essentially based on perceptual and motor processes. Contrary to the classical approach in cognitive science, in which concepts are viewed as amodal, arbitrary symbols, embodied semantics argues that concepts must be “grounded” in sensorimotor experiences in order to have meaning. In line with this view, neuroimaging studies have shown a roughly somatotopic pattern of activation along cortical motor areas (broadly construed) for the observation of actions involving different body parts, as well as for action-related language comprehension. These findings have been interpreted in terms of a mirror-neuron system, which automatically matches observed and executed actions. However, the somatotopic pattern of activation found in these studies is very coarse, with significant overlap between body parts, and sometimes with multiple representations for the same body part. Furthermore, the localization of the respective activations varies considerably across studies. Based on recent work on the motor cortex in monkeys, we suggest that these discrepancies result from the organization of the primate motor cortex (again, broadly construed), which probably includes maps of the coordinated actions making up the individual’s motor repertoire, rather than a single, continuous map of the body. We review neurophysiological and neuroimaging data supporting this hypothesis and discuss ways in which this framework can be used to further test the links between neural mirroring and linguistic processing.
• There is strong evidence that the DMN is involved in semantic cognition.
• DMN areas contribute to semantics at the word, sentence, and discourse levels.
• The DMN enables the construction of embodied situation models.
• Situation models are implemented as dynamic recurrent neural assemblies.
• These assemblies include modality-specific, multimodal, and DMN cortical areas.
This review examines whether and how the “default mode” network (DMN) contributes to semantic processing. We review evidence implicating the DMN in the processing of individual word meanings and in sentence- and discourse-level semantics. Next, we argue that the areas comprising the DMN contribute to semantic processing by coordinating and integrating the simultaneous activity of local neuronal ensembles across multiple unimodal and multimodal cortical regions, creating a transient, global neuronal ensemble. The resulting ensemble implements an integrated simulation of phenomenological experience – that is, an embodied situation model – constructed from various modalities of experiential memory traces. These situation models, we argue, are necessary not only for semantic processing but also for aspects of cognition that are not traditionally considered semantic. Although many aspects of this proposal remain provisional, we believe it provides new insights into the relationships between semantic and non-semantic cognition and into the functions of the DMN.
The nature of the representational code underlying conceptual knowledge remains a major unsolved problem in cognitive neuroscience. We assessed the extent to which different representational systems contribute to the instantiation of lexical concepts in high-level, heteromodal cortical areas previously associated with semantic cognition. We found that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in the frontal, parietal, and temporal cortex. In most of these areas, we found a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective, and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organization. These results were found independently for object and event concepts. Our findings indicate that concept representations in the heteromodal cortex are based, at least in part, on experiential information. They also reveal that, in most heteromodal areas, event concepts have more heterogeneous representations (i.e., they are more easily decodable) than object concepts and that other areas beyond the traditional "semantic hubs" contribute to semantic cognition, particularly the posterior cingulate gyrus and the precuneus.
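Regional decodability of the kind reported above is often quantified with cross-validated classification of activation patterns within a region of interest. A minimal sketch, with hypothetical data shapes, labels, and function names (not the authors' pipeline, which was RSA-based):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def roi_decodability(patterns, labels, folds=5):
    """Cross-validated classification accuracy for one region of interest.

    patterns: (n_trials, n_voxels) activation estimates within the region
    labels:   (n_trials,) concept labels (e.g., object vs. event concepts)
    """
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, patterns, labels, cv=folds).mean()

# Toy example with random data; accuracy should hover around chance (0.5)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))
y = np.repeat(["object", "event"], 40)
print(roi_decodability(X, y))
```

Higher cross-validated accuracy in a region indicates more heterogeneous (and hence more separable) concept representations there.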
Neuroimaging, neuropsychological, and psychophysical evidence indicate that concept retrieval selectively engages specific sensory and motor brain systems involved in the acquisition of the retrieved concept. However, it remains unclear which supramodal cortical regions contribute to this process and what kind of information they represent. Here, we used representational similarity analysis of two large fMRI datasets with a searchlight approach to generate a detailed map of human brain regions where the semantic similarity structure across individual lexical concepts can be reliably detected. We hypothesized that heteromodal cortical areas typically associated with the default mode network encode multimodal experiential information about concepts, consistent with their proposed role as cortical integration hubs. In two studies involving different sets of concepts and different participants (both sexes), we found a distributed, bihemispheric network engaged in concept representation, composed of high-level association areas in the anterior, lateral, and ventral temporal lobe; inferior parietal lobule; posterior cingulate gyrus and precuneus; and medial, dorsal, ventrolateral, and orbital prefrontal cortex. In both studies, a multimodal model combining sensory, motor, affective, and other types of experiential information explained significant variance in the neural similarity structure observed in these regions that was not explained by unimodal experiential models or by distributional semantics (i.e., word2vec similarity). These results indicate that during concept retrieval, lexical concepts are represented across a vast expanse of high-level cortical regions, especially in the areas that make up the default mode network, and that these regions encode multimodal experiential information.
Conceptual knowledge includes information acquired through various modalities of experience, such as visual, auditory, tactile, and emotional information. We investigated which brain regions encode mental representations that combine information from multiple modalities when participants think about the meaning of a word. We found that such representations are encoded across a widely distributed network of cortical areas in both hemispheres, including temporal, parietal, limbic, and prefrontal association areas. Several areas not traditionally associated with semantic cognition were also implicated. Our results indicate that the retrieval of conceptual knowledge during word comprehension relies on a much larger portion of the cerebral cortex than previously thought and that multimodal experiential information is represented throughout the entire network.
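The searchlight representational similarity analysis described in the abstract above can be sketched as follows; the neighborhood definition, distance metrics, and function names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def searchlight_rsa(data, model_rdm, neighborhoods):
    """Model-to-brain representational similarity at each searchlight center.

    data:          (n_concepts, n_voxels) activation patterns
    model_rdm:     condensed (upper-triangle) model dissimilarity vector
    neighborhoods: list of voxel-index arrays, one searchlight per center
    Returns one Spearman correlation per searchlight.
    """
    scores = np.empty(len(neighborhoods))
    for c, voxels in enumerate(neighborhoods):
        neural_rdm = pdist(data[:, voxels], metric="correlation")
        scores[c] = spearmanr(neural_rdm, model_rdm).correlation
    return scores
```

Comparing the map obtained with a multimodal experiential RDM against maps obtained with unimodal or word2vec RDMs (for example, via partial correlation as sketched earlier) isolates the variance unique to the multimodal model.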
The neural representation of body part concepts. Mazurchuk, Stephen; Fernandino, Leonardo; Tong, Jia-Qing, et al. Cerebral Cortex (New York, N.Y.: 1991), 06/2024, Volume 34, Issue 6. Journal article, peer reviewed.
Neuropsychological and neuroimaging studies provide evidence for a degree of category-related organization of conceptual knowledge in the brain. Some of this evidence indicates that body part concepts are represented distinctly from other categories; yet, the neural correlates and mechanisms underlying these dissociations are unclear. We expand on the limited prior data by measuring functional magnetic resonance imaging responses induced by body part words and performing a series of analyses investigating the cortical representation of this semantic category. Across voxel-level contrasts, pattern classification, representational similarity analysis, and vertex-wise encoding analyses, we find converging evidence that the posterior middle temporal gyrus, the supramarginal gyrus, and the ventral premotor cortex in the left hemisphere play important roles in the preferential representation of this category compared to other concrete objects.
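Vertex-wise encoding analyses of the kind mentioned above are commonly implemented as cross-validated regularized regression from semantic features to each vertex's response. A minimal sketch under that assumption (hypothetical names; not the authors' code):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def vertexwise_encoding(features, responses, alpha=1.0, folds=5):
    """Cross-validated encoding-model performance at every cortical vertex.

    features:  (n_words, n_features) semantic feature matrix
    responses: (n_words, n_vertices) response estimate for each vertex
    Returns the mean held-out Pearson correlation per vertex.
    """
    n_words, n_vertices = responses.shape
    corrs = np.zeros(n_vertices)
    for train, test in KFold(folds, shuffle=True, random_state=0).split(features):
        # fit one ridge model per fold, predicting all vertices jointly
        model = Ridge(alpha=alpha).fit(features[train], responses[train])
        pred = model.predict(features[test])
        for v in range(n_vertices):
            corrs[v] += np.corrcoef(pred[:, v], responses[test, v])[0, 1] / folds
    return corrs
```

Vertices whose held-out correlations are reliably above zero are taken to encode the semantic dimensions included in the feature matrix.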