Accuracy in copying a figure is one of the most sensitive measures of visuo-constructional ability. However, drawing tasks also involve other cognitive and motor abilities, which may influence the final graphic produced. Nevertheless, these aspects are not taken into account in conventional scoring methodologies. In this study, we implemented a novel tablet-based assessment that acquires data for the entire execution of the Rey Complex Figure copy task (T-RCF). This system extracts 12 indices capturing various dimensions of drawing ability. We also analysed the structure of the relationships between these indices and provided insights into the constructs that they capture. 102 healthy adults completed the T-RCF. A subgroup of 35 participants also completed a paper-and-pencil drawing battery from which constructional, procedural, and motor measures were obtained. Principal component analysis of the T-RCF indices identified spatial, procedural, and kinematic components as distinct dimensions of drawing execution. Accordingly, a composite score for each dimension was determined. Correlational analyses provided indications of their validity by showing that spatial, procedural, and kinematic scores were associated with constructional, organisational, and motor measures of drawing, respectively. Importantly, final copy accuracy was associated with all of these aspects of drawing. In conclusion, copying complex figures entails an interplay of multiple functions. The T-RCF provides a unique opportunity to analyse the entire drawing process and to extract scores for three critical dimensions of drawing execution.
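The dimension-extraction step described above (principal component analysis of the 12 indices, followed by composite scores per component) can be sketched as follows. This is a minimal illustration with random stand-in data, not the study's actual indices or loadings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 12 T-RCF drawing indices of 102 participants
# (values are random; the real indices come from the tablet recordings).
X = rng.normal(size=(102, 12))

# Standardize, then diagonalize the correlation matrix (classic PCA).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = (Z.T @ Z) / len(Z)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]     # components sorted by explained variance
loadings = eigvecs[:, order[:3]]      # keep 3 components (here standing in for
                                      # spatial, procedural, kinematic)
scores = Z @ loadings                 # one composite score per dimension
print(scores.shape)                   # (102, 3)
```

Projecting the standardized indices onto the retained components yields one composite score per participant and dimension, which can then be correlated with external paper-and-pencil measures.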
Word frequency is one of the best predictors of language processing. Typically, word frequency norms are based entirely on natural-language text data, thus representing what the literature typically refers to as purely linguistic experience. This study presents Flickr frequency norms as a novel word frequency measure from a domain-specific corpus inherently tied to extra-linguistic information: words used as image tags on social media. To obtain Flickr frequency measures, we exploited the photo-sharing platform Flickr (hosting billions of photos) and extracted the number of uploaded images tagged with each of the words in the lexicon considered. Here, we systematically examine the peculiarities of Flickr frequency norms and show that Flickr frequency is a hybrid metric, lying at the intersection between language and visual experience, with specific biases induced by its being based on image-focused social media. Moreover, regression analyses indicate that Flickr frequency captures additional information beyond what is already encoded in existing norms of linguistic, sensorimotor, and affective experience. These new norms therefore capture an aspect of language usage that is missing from traditional frequency measures: the interplay between language and vision, which – this study demonstrates – has its own impact on word processing. The Flickr frequency norms are openly available on the Open Science Framework (https://osf.io/2zfs3/).
• Data-driven computational models provide valid estimates of semantic similarity.
• Image-based and text-based similarity estimates are tested in semantic priming.
• Visually-grounded properties of concepts automatically influence semantic processing.
• Visual and linguistic information independently contribute to conceptual processing.
In their strongest formulation, theories of grounded cognition claim that concepts are made up of sensorimotor information. Following this equivalence, perceptual properties of objects should consistently influence processing, even in purely linguistic tasks where perceptual information is neither solicited nor required. Previous studies have tested this prediction in semantic priming tasks but have not observed perceptual influences on participants’ performance. However, those findings suffer from critical shortcomings, which may have prevented potential visually grounded/perceptual effects from being detected. Here, we investigate this topic by applying an innovative method expected to increase the sensitivity in detecting such perceptual effects. Specifically, we adopt an objective, data-driven, computational approach to independently quantify vision-based and language-based similarities for prime-target pairs on a continuous scale. We test whether these measures predict behavioural performance in a semantic priming mega-study with various experimental settings. Vision-based similarity was found to facilitate performance, but a dissociation between vision-based and language-based effects was also observed. Thus, in line with theories of grounded cognition, perceptual properties can facilitate word processing even in purely linguistic tasks, but the behavioural dissociation at the same time challenges strong claims of sensorimotor and conceptual equivalence.
Visual search can be guided by top-down and bottom-up processes, with either one dominating the other depending on the task (e.g., feature versus conjunction). Moreover, different search tasks bring about different expectations about the type, or frequency, of distractor stimuli. These expectations could promote top-down “task-sets” that may impact performance even when distractors are temporarily absent. Here, we characterized the role and extent of recruitment of proactive top-down processes for distractor expectation in feature and conjunction search. Participants conducted feature and conjunction search tasks for a visual target among distractors, which were either frequently presented or completely absent. The recruitment of proactive top-down processes for distractor expectation entailed slower, yet more accurate, responses on distractor-absent trials in the frequent-distractor (versus no-distractor) context of both tasks. These effects were larger in the conjunction versus feature task and were not impacted by stimulus duration and time pressure (short/present in Experiment 1, unlimited/absent in Experiment 2, respectively). Results were replicated when the presence/absence of distractors at each trial was fully predictable (Experiment 3), and when several parameters of visual search were changed (Experiment 4). Our findings indicate that top-down task-sets related to distractor expectation entail performance costs and benefits in visual search. These effects occur throughout task blocks rather than trial-to-trial, are modulated by search type, and confirm that proactive top-down processes intervene in feature search.
Abstract The paper-and-pencil Rey–Osterrieth Complex Figure (ROCF) copy task has been extensively used to assess visuo-constructional skills in children and adults. The scoring systems utilized in clinical practice provide an integrated evaluation of the drawing process, without differentiating between its visuo-constructional, organizational, and motor components. Here, a tablet-based ROCF copy task capable of providing a quantitative assessment of the drawing process, differentiating between visuo-constructional, organizational, and motor skills, is trialed in 94 healthy children between 7 and 11 years of age. Through previously validated algorithms, 12 indices of performance in the ROCF copy task were obtained for each child. Principal component analysis of the 12 indices identified spatial, procedural, and kinematic components as distinct dimensions of the drawing process. A composite score for each dimension was determined, and correlation analysis between composite scores and conventional paper-and-pencil measures of visuo-constructional, procedural, and motor skills was performed. The results confirmed that the constructional, organizational, and motor dimensions underlie complex figure drawing in children, and that each dimension can be measured by a unique composite score. In addition, the composite scores obtained here from children were compared with previous results from adults, offering a novel insight into how the interplay between the three dimensions of drawing evolves with age.
The formation of false memories is one of the most widely studied topics in cognitive psychology. The Deese–Roediger–McDermott (DRM) paradigm is a powerful tool for investigating false memories and revealing the cognitive mechanisms subserving their formation. In this task, participants first memorize a list of words (encoding phase) and next have to indicate whether words presented in a new list were part of the initially memorized one (recognition phase). By employing DRM lists optimized to investigate semantic effects, previous studies highlighted a crucial role of semantic processes in false memory generation, showing that new words semantically related to the studied ones tend to be more erroneously recognized (compared to new words less semantically related). Despite the strengths of the DRM task, this paradigm faces a major limitation in list construction due to its reliance on human-based association norms, posing both practical and theoretical concerns. To address these issues, we developed the False Memory Generator (FMG), an automated and data-driven tool for generating DRM lists, which exploits similarity relationships between items populating a vector space. Here, we present FMG and demonstrate the validity of the generated lists by successfully replicating well-known semantic effects on false memory production. FMG potentially has broad applications by allowing for testing false memory production in domains that go well beyond the current possibilities, as it can in principle be applied to any vector space encoding properties related to word referents (e.g., lexical, orthographic, phonological, sensory, affective, etc.) or to other types of stimuli (e.g., images, sounds, etc.).
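The list-generation idea behind FMG (select the items most similar to an unstudied critical lure within a vector space) can be sketched as follows. The vocabulary and embedding vectors below are random stand-ins; a real FMG-style pipeline would use actual word embeddings.

```python
import numpy as np

# Toy embedding space: random unit vectors standing in for word embeddings.
rng = np.random.default_rng(1)
vocab = ["sleep", "bed", "rest", "dream", "chair", "car"]
E = rng.normal(size=(len(vocab), 50))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit norm -> dot = cosine

def drm_list(lure, k=3):
    """Return the k vocabulary words most similar to the critical lure.

    The lure itself is excluded: in a DRM list it stays unstudied, so that
    its later (false) recognition can be measured.
    """
    i = vocab.index(lure)
    sims = E @ E[i]              # cosine similarity of every word to the lure
    sims[i] = -np.inf            # never include the lure in its own list
    top = np.argsort(sims)[::-1][:k]
    return [vocab[j] for j in top]

print(drm_list("sleep"))
```

Because the procedure only needs a similarity function over a vector space, the same skeleton applies unchanged to orthographic, phonological, or image embeddings.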
Sustained attention is a fundamental prerequisite for all cognitive functions, and its impairment is a common aftermath of both developmental and acquired neurological disorders. To date, all sustained attention tasks rely heavily on selective attention to external stimuli. The interaction between selective and sustained attention represents a limit in the field of assessment and may mislead researchers or distort conclusions. The aim of the present perspective study was to propose a sustained version of the Paced Finger Tapping (S-PFT) test as a novel approach to measuring sustained attention that does not leverage external stimuli. Here, we administered the S-PFT and other attentional tasks (visual sustained attention, visuospatial attention capacity, selective attention, and divided attention tasks) to 85 adolescents. We provide evidence suggesting that the S-PFT is effective in causing performance decrement over time, an important hallmark of sustained attention tasks. We also present descriptive statistics showing the relationship between the S-PFT and the other attentional tasks. These analyses show that, unlike visual sustained attention tests, performance on our internal sustained attention task was not correlated with measures of selective attention and visuospatial attention capacity. Our results suggest that the S-PFT could represent a promising alternative tool for both empirical research and clinical assessment of sustained attention.
Normative measures of verbal material are fundamental in psycholinguistic and cognitive research for the control of confounds in experimental procedures and for achieving a better comprehension of our conceptual system. Traditionally, normative studies have focused on classical psycholinguistic variables, such as concreteness and imageability. Recent works have shifted researchers’ focus to perceptual strength, in which items are rated separately for each of the five senses. We present a resource that includes perceptual norms for 1,121 Italian words extracted from the Italian version of ANEW. Norms were collected from 57 native speakers. For each word, the participants provided perceptual-strength ratings for each of the five perceptual modalities. The performance of the perceptual norms in predicting human behavior was tested in two novel experiments, a lexical decision task and a naming task. Concreteness, imageability, and different composite variables representing perceptual-strength scores were considered as competing predictors in a series of linear regressions, evaluating the goodness of fit of each model. For both tasks, the model with imageability as the only predictor was the best-fitting model according to the Akaike information criterion, whereas the model with the five modalities considered separately better described the data in terms of explained variance. These results differ from those previously reported for English, in which maximum perceptual strength emerged as the best predictor of behavior. We investigated this discrepancy by comparing Italian and English data for the same set of translated items, thus confirming a genuine cross-linguistic effect. We thus confirmed that perceptual experience influences linguistic processing, even though evaluations from different languages are needed to generalize this claim.
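The model-comparison logic (competing OLS regressions ranked by the Akaike information criterion) can be sketched as below. The simulated response times and predictor names are invented for illustration; only the AIC-vs-fit comparison mirrors the analysis described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
imageability = rng.normal(size=n)            # hypothetical single predictor
modalities = rng.normal(size=(n, 5))         # hypothetical 5 perceptual ratings
# Simulated RTs driven (by construction) only by imageability:
rt = 600 - 20 * imageability + rng.normal(scale=30, size=n)

def aic(y, X):
    """Gaussian AIC (up to an additive constant) for an OLS fit."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    k = X1.shape[1] + 1                          # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

# A model with more predictors always lowers RSS (raw fit), but AIC
# penalizes the extra parameters, so a leaner model can still win.
print(aic(rt, imageability), aic(rt, modalities))
```

This is exactly the tension reported above: explained variance favors the richer five-modality model, while AIC can favor the single-predictor model.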
• Perceptually-grounded properties of constituents affect compound processing.
• Automatic perceptually-grounded conceptual combination irrespective of task demands.
• We propose a computational model of perceptually-grounded conceptual combination.
Previous studies found that an automatic meaning-composition process affects the processing of morphologically complex words, and related this operation to conceptual combination. However, research on embodied cognition demonstrates that concepts are more than just lexical meanings, being also grounded in perceptual experience. Therefore, perception-based information should also be involved in mental operations on concepts, such as conceptual combination. Consequently, we should expect to find perceptual effects in the processing of morphologically complex words. To investigate this hypothesis, we present the first fully-implemented and data-driven model of perception-based (more specifically, vision-based) conceptual combination, and use the predictions of such a model to investigate processing times for compound words in four large-scale behavioral experiments employing three paradigms (naming, lexical decision, and timed sensibility judgments). We observe facilitatory effects of vision-based compositionality in all three paradigms, over and above a strong language-based (lexical and semantic) baseline, thus demonstrating for the first time perceptually grounded effects at the sub-lexical level. This suggests that perceptually-grounded information is not only utilized according to specific task demands but rather automatically activated when available.
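One simple way to quantify vision-based compositionality of this kind is to compare a compound's observed vector against one composed from its constituents. The additive composition rule and the example vectors below are assumptions for illustration, not necessarily the paper's exact model.

```python
import numpy as np

# Illustrative vision-based vectors for a compound and its constituents
# (random stand-ins; a real model would derive these from image data).
rng = np.random.default_rng(3)
vec = {"snow": rng.normal(size=64),
       "ball": rng.normal(size=64),
       "snowball": rng.normal(size=64)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Additive composition: predict the compound as the sum of its constituents;
# compositionality = similarity between predicted and observed vectors.
predicted = vec["snow"] + vec["ball"]
compositionality = cosine(predicted, vec["snowball"])
print(compositionality)
```

A continuous score like this can then be entered as a predictor of naming or lexical-decision latencies, alongside language-based baselines.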
Quantitative, data-driven models for mental representations have long enjoyed popularity and success in psychology (e.g., distributional semantic models in the language domain), but have largely been missing for the visual domain. To overcome this, we present ViSpa (Vision Spaces), high-dimensional vector spaces that include vision-based representations for naturalistic images as well as concept prototypes. These vectors are derived directly from visual stimuli through a deep convolutional neural network trained to classify images, and allow us to compute vision-based similarity scores between any pair of images and/or concept prototypes. We successfully evaluate these similarities against human behavioral data in a series of large-scale studies, including off-line judgments (visual similarity judgments for the referents of word pairs in Study 1 and for image pairs in Study 2, and typicality judgments for images given a label in Study 3) as well as online processing times and error rates in a discrimination task (Study 4) and a priming task (Study 5) with naturalistic image material. ViSpa similarities predict behavioral data across all tasks, which renders ViSpa a theoretically appealing model for vision-based representations and a valuable research tool for data analysis and the construction of experimental material: ViSpa allows for precise control over experimental material consisting of images and/or words denoting imageable concepts, and introduces a specifically vision-based similarity for word pairs. To make ViSpa available to a wide audience, this article (a) includes (video) tutorials on how to use ViSpa in R and (b) presents a user-friendly web interface at http://vispa.fritzguenther.de.
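The two core operations described above (building a concept prototype from image vectors and scoring vision-based similarity between any two vectors) can be sketched as follows. The vectors are random stand-ins; ViSpa derives them from a deep convolutional network's activations.

```python
import numpy as np

# Random stand-ins for CNN activation vectors of individual images
# (dimensionality and values are illustrative only).
rng = np.random.default_rng(4)
dog_images = rng.normal(size=(10, 512))   # 10 images labeled "dog"
cat_images = rng.normal(size=(10, 512))   # 10 images labeled "cat"

def prototype(vectors):
    """Concept prototype = centroid of the concept's image vectors."""
    return vectors.mean(axis=0)

def cosine(a, b):
    """Vision-based similarity between two vectors (images or prototypes)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity between two concept prototypes (word-pair similarity),
# or equally between any single image and a prototype (typicality).
sim = cosine(prototype(dog_images), prototype(cat_images))
print(sim)
```

Because a single similarity function serves images, prototypes, and mixtures of the two, the same score supports word-pair, image-pair, and image-to-label comparisons.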