• Perceptually-grounded properties of constituents affect compound processing.
• Automatic perceptually-grounded conceptual combination irrespective of task demands.
• We propose a computational model of perceptually-grounded conceptual combination.
Previous studies found that an automatic meaning-composition process affects the processing of morphologically complex words, and related this operation to conceptual combination. However, research on embodied cognition demonstrates that concepts are more than just lexical meanings: they are also grounded in perceptual experience. Therefore, perception-based information should also be involved in mental operations on concepts, such as conceptual combination. Consequently, we should expect to find perceptual effects in the processing of morphologically complex words. To investigate this hypothesis, we present the first fully-implemented and data-driven model of perception-based (more specifically, vision-based) conceptual combination, and use the predictions of this model to investigate processing times for compound words in four large-scale behavioral experiments employing three paradigms (naming, lexical decision, and timed sensibility judgments). We observe facilitatory effects of vision-based compositionality in all three paradigms, over and above a strong language-based (lexical and semantic) baseline, thus demonstrating for the first time perceptually grounded effects at the sub-lexical level. This suggests that perceptually-grounded information is not only utilized according to specific task demands but rather automatically activated when available.
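The abstract does not spell out the model's internals, but a CAOSS-style additive-matrix composition over vision-based vectors is one plausible form it could take. The sketch below is an assumption about that general shape, not the paper's exact specification: `compose` combines two constituent vectors through learned matrices `M` and `N` (hypothetical here), and `compositionality` scores how well the composed vector matches the observed compound vector, the kind of predictor the experiments relate to processing times.

```python
import numpy as np

def compose(u, v, M, N):
    # Composed compound representation as a matrix-weighted combination
    # of the constituents' vectors; M and N would be learned from
    # observed compounds (assumed form, not the paper's exact model).
    return M @ u + N @ v

def compositionality(compound_vec, composed_vec):
    # Cosine similarity between the observed compound vector and the
    # vector composed from its constituents.
    return float(compound_vec @ composed_vec /
                 (np.linalg.norm(compound_vec) * np.linalg.norm(composed_vec)))

# toy illustration with identity-like composition matrices
rng = np.random.default_rng(3)
M = np.eye(5) * 0.5
N = np.eye(5) * 0.5
u, v = rng.normal(size=5), rng.normal(size=5)
composed = compose(u, v, M, N)
```

With these toy matrices the composition reduces to averaging the constituents; in a trained model the matrices would encode how modifier and head contribute asymmetrically to the compound.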
Different experiential traces (i.e., linguistic, motor, and perceptual) are likely contributing to the organization of human semantic knowledge. Here, we aimed to address this issue by investigating whether visual experience may affect the sensitivity to distributional priors from natural language. We conducted an independent reanalysis of data from Bottini et al., in which early blind and sighted participants performed an auditory lexical decision task. Since previous research has shown that semantic neighborhood density (the mean distance between a target word and its closest semantic neighbors) can influence performance in lexical decision tasks, we investigated whether vision may alter the reliance on this semantic index. We demonstrate that early blind participants are more sensitive to semantic neighborhood density than sighted participants, as indicated by the significantly faster response times for words with higher levels of semantic neighborhood density shown by the blind group. These findings suggest that an early lack of visual experience may lead to enhanced sensitivity to the distributional history of words in natural language, deepening in turn our understanding of the strict interplay between linguistic and perceptual experience in the organization of conceptual knowledge.
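Semantic neighborhood density, as defined above, can be computed directly from distributional word vectors. The sketch below is a minimal operationalization under stated assumptions: it uses cosine similarity to the k nearest neighbors (the exact k and metric used in the reanalysis may differ), and the random vectors are placeholders for real embeddings.

```python
import numpy as np

def neighborhood_density(target_vec, lexicon_vecs, k=10):
    # Mean cosine similarity between a target word's vector and its
    # k nearest semantic neighbors; higher values indicate a denser
    # semantic neighborhood. k and the metric are illustrative choices.
    t = target_vec / np.linalg.norm(target_vec)
    m = lexicon_vecs / np.linalg.norm(lexicon_vecs, axis=1, keepdims=True)
    sims = m @ t                   # cosine similarity to every other word
    top = np.sort(sims)[-k:]      # the k most similar neighbors
    return float(top.mean())

# toy illustration: random vectors standing in for real embeddings
rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 50))
density = neighborhood_density(vecs[0], vecs[1:])
```

In the study's terms, words with higher density values would be the ones for which blind participants showed the response-time advantage.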
In this study, we use temporally aligned word embeddings and a large diachronic corpus of English to quantify language change in a data-driven, scalable way, which is grounded in language use. We show a unique and reliable relation between measures of language change and age of acquisition (AoA) while controlling for frequency, contextual diversity, concreteness, length, dominant part of speech, orthographic neighborhood density, and diachronic frequency variation. We analyze measures of language change tackling both the change in lexical representations and the change in the relation between lexical representations and the words with the most similar usage patterns, showing that they capture different aspects of language change. Our results show a unique relation between language change and AoA, which is stronger when considering neighborhood-level measures of language change: Words with more coherent diachronic usage patterns tend to be acquired earlier. The results support theories positing a link between ontogenetic and ethnogenetic processes in language.
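The two families of change measures mentioned above can be sketched from aligned embeddings. The code below is illustrative, under the assumption that the two embedding spaces have already been aligned (e.g., via orthogonal Procrustes): `semantic_change` tracks the change in a word's own vector, while `neighborhood_change` is a hypothetical variant of a neighborhood-level measure, tracking turnover among the word's nearest neighbors.

```python
import numpy as np

def semantic_change(vec_t1, vec_t2):
    # Cosine distance between a word's temporally aligned vectors at
    # two time points (alignment is assumed to have been done already).
    cos = vec_t1 @ vec_t2 / (np.linalg.norm(vec_t1) * np.linalg.norm(vec_t2))
    return 1.0 - cos

def neighborhood_change(word_idx, space_t1, space_t2, k=25):
    # Neighborhood-level change: proportion of the word's k nearest
    # neighbors that differ across the two time points (a hypothetical
    # operationalization of a neighborhood-level measure).
    def top_k(space):
        sims = space @ space[word_idx]
        return set(np.argsort(sims)[::-1][1:k + 1])  # skip the word itself
    return 1.0 - len(top_k(space_t1) & top_k(space_t2)) / k
```

A word whose value is low on both measures has a coherent diachronic usage pattern, the profile the paper associates with earlier acquisition.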
In the present study, the role of phonological information in visual word recognition is investigated by adopting a large-scale data-driven approach that exploits a new consistency measure based on distributional semantics methods. A recent study by Marelli, Amenta, and Crepaldi (2015) showed that the consistency between an orthographic string and the meanings to which it is associated in a large corpus is a relevant predictor in lexical decision experiments. Exploiting irregular mappings between orthography and phonology in English, we were able to compute a phonology-to-semantics consistency measure that dissociates from its orthographic counterpart and tested both measures on lexical decision data taken from the British Lexicon Project (Keuleers et al., 2012). Results showed that both orthography and phonology are activated during visual word recognition. However, their contribution is crucially determined by the extent to which they are informative of the word semantics, and phonology plays a crucial role in accessing word meaning.
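The consistency logic described above can be sketched as a frequency-weighted average of semantic similarities between a form (an orthographic or phonological string) and the words sharing it. The function below is a simplified stand-in, with an assumed weighting scheme; the published measure's exact formula may differ.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def form_semantics_consistency(form_vec, relative_vecs, relative_freqs):
    # Frequency-weighted mean cosine similarity between the meaning
    # associated with a form and the meanings of the words containing
    # that form. High values = the form reliably cues one meaning.
    sims = np.array([cos(form_vec, v) for v in relative_vecs])
    w = np.asarray(relative_freqs, dtype=float)
    return float((sims * w).sum() / w.sum())
```

Computing this once over orthographic relatives and once over phonological relatives yields the two dissociable predictors the study compares.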
Theories of grounded cognition assume that conceptual representations are grounded in sensorimotor experience. However, abstract concepts such as jealousy or childhood have no directly associated referents with which such sensorimotor experience can be gained; therefore, the grounding of abstract concepts has long been a topic of debate. Here, we propose (a) that systematic relations exist between semantic representations learned from language on the one hand and perceptual experience on the other hand, (b) that these relations can be learned in a bottom-up fashion, and (c) that it is possible to extrapolate from this learning experience to predict expected perceptual representations for words even where direct experience is missing. To test this, we implement a data-driven computational model that is trained to map language-based representations (obtained from text corpora, representing language experience) onto vision-based representations (obtained from an image database, representing perceptual experience), and apply its mapping function onto language-based representations for abstract and concrete words outside the training set. In three experiments, we present participants with these words, accompanied by two images: the image predicted by the model and a random control image. Results show that participants' judgements were in line with model predictions even for the most abstract words. This preference was stronger for more concrete items and decreased for the more abstract ones. Taken together, our findings have substantial implications in support of the grounding of abstract words, suggesting that we can tap into our previous experience to create possible visual representations we don't otherwise have.
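The core of such a mapping model can be sketched as a linear map from text-derived vectors to image-derived vectors. The ridge regression below is a simplified stand-in for the trained mapping function (the paper's actual mapping may be more elaborate), and the toy data are placeholders for real text and vision embeddings.

```python
import numpy as np

def fit_text_to_vision_map(X_text, Y_vision, lam=1.0):
    # Closed-form ridge regression: learn W such that X_text @ W
    # approximates Y_vision. Applying W to a held-out word's text
    # vector yields its predicted visual representation.
    d = X_text.shape[1]
    return np.linalg.solve(X_text.T @ X_text + lam * np.eye(d),
                           X_text.T @ Y_vision)

# toy data: 100 "words" with 20-d text vectors and 15-d vision vectors
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
W_true = rng.normal(size=(20, 15))
Y = X @ W_true
W = fit_text_to_vision_map(X, Y, lam=1e-6)

# predicted visual representation for an unseen word:
y_hat = rng.normal(size=20) @ W
```

In the experiments, `y_hat` for an abstract word would be compared against candidate images to pick the one the model predicts participants should prefer.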
Most printed Chinese words are compounds built from the combination of meaningful characters. Yet, there is a poor understanding of how individual characters contribute to the recognition of compounds. Using a megastudy of Chinese word recognition (Tse et al., 2017), we examined how the lexical decision of existing and novel Chinese compounds was influenced by two properties of individual characters: family size (the number of distinct words that embed a character) and family semantic consistency (the average semantic relatedness between a character and all words containing it). Results revealed that both variables influence word and nonword processing: Words are recognized more quickly and accurately when they contain characters that occur frequently across different words and that make consistent meaningful contributions to those words, while nonwords containing those types of characters are rejected more slowly. These findings suggest that the learning of individual characters is based not only on the quantity of experience with them but also on the reliability of the semantic information they communicate. In addition, readers are able to generalize character knowledge acquired from previous word experiences to their daily encounters with familiar and unfamiliar words. We close by discussing how word experience shapes character knowledge when different ways of calculating family properties are considered.
In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, namely that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way according to the sentence context to which they belong. This way, each target word was embedded in a sentence eliciting either its transparent or opaque interpretation. We analyzed whether the effect of stem frequency changes according to whether the (very same) word is read as a genuine derivation (transparent context) versus as a pseudoderived word (opaque context). Analysis of the first fixation durations revealed a stem-word frequency effect in both opaque and transparent contexts, thus showing that stems were accessed whether or not they contributed to word meaning, that is, word decomposition is indeed blind to semantics. However, while the stem-word frequency effect was facilitatory in the transparent context, it was inhibitory in the opaque context, thus showing an early involvement of semantic representations. This pattern of data emerged for words with short suffixes. These results indicate that derived and pseudoderived words are segmented into their constituent morphemes also in natural reading; however, this blind-to-semantics process activates morpheme representations that are semantically connoted.
The mental time line (MTL) is a spatial continuum on which earlier events are generally associated with the left space and later events with the right space. Accordingly, past- and future-related words receive faster responses with, respectively, the left and the right hand. Yet, it is currently unclear whether the MTL is activated by the whole word or whether it can be triggered by more subtle sublexical cues, such as verb-endings, and whether the activation of this spatial continuum is an automatic phenomenon. The aim of this study is to test whether verb-endings carry conceptual information that is in turn capable of activating the MTL, and whether this activation holds even when the temporal information is not explicitly processed. We designed three experiments. In Experiment 1, consisting of a temporal categorization task, and in Experiment 2, consisting of a lexical decision task, we tested Italian tensed verbs (trov-avo "I found," trov-erò "I will find") and pseudo-verbs (trop-avo, trop-erò). Results of Experiment 1 showed that both tensed verbs and pseudo-verbs were spatially coded on the MTL. Results from Experiment 2 showed that the MTL is activated by the verb-endings even when temporal information was task-irrelevant (i.e., lexical decision task). Experiment 3 further clarified that the spatial-temporal congruency effect does not emerge during the evaluation of an inhomogeneous set of stimuli (i.e., when adding time-unrelated fillers to the stimuli). Overall, the present findings indicate that sublexical strings carry specific semantic information that comes into play in the generation of spatial-temporal associations.
The exact semantic processes subserving the formation of false memories are still poorly understood. Here, we directly probed the semantic origins of false memories in a typical Deese-Roediger-McDermott (DRM) task, by predicting participants' performance in this task through data-driven distributional semantic models. Participants were required to study lists of words and then to perform a recognition task. Our findings indicate that the participants' performance is better accounted for by a local rather than a global strategy on the task at hand: the single lists composing the task activate specific semantic clusters that are responsible for the occurrence of false memories. In particular, memory performance followed a continuous semantic gradient, with higher false-recognition rates for greater semantic similarity between the lures (i.e., the false memory items) and the words in the corresponding lists. Crucially, our findings also show that semantic memory is differently involved in veridical and false memories, with this pattern being consistent across two reanalyses of data from previous studies and being replicated in an independent experiment. We thus outline an empirically-driven theoretical framework to account for the semantic processes supporting veridical and false memory formation.
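The list-local measure underlying the semantic gradient described above can be sketched directly from word vectors. The function and toy vocabulary below are illustrative (the names and 2-d vectors are placeholders, not the study's model or stimuli): the lure's mean similarity to its own study list is the quantity predicted to track false-recognition rates.

```python
import numpy as np

def lure_list_similarity(lure, study_list, vectors):
    # Mean cosine similarity between a critical lure and the words of
    # its own study list: a list-local (rather than global) semantic
    # measure of how strongly the list converges on the lure.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.mean([cos(vectors[lure], vectors[w]) for w in study_list]))

# toy vectors standing in for real embeddings of DRM-style items
vectors = {"sleep": np.array([1.0, 0.1]),
           "bed": np.array([0.9, 0.2]),
           "rest": np.array([0.8, 0.3]),
           "butter": np.array([0.1, 1.0])}
related = lure_list_similarity("sleep", ["bed", "rest"], vectors)
unrelated = lure_list_similarity("sleep", ["butter"], vectors)
```

On the abstract's account, lures like "sleep" presented after a tightly related list should be falsely recognized more often than after an unrelated one, in proportion to this similarity.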