We performed three experiments to investigate whether adjectives can modulate the sensorimotor activation elicited by nouns. In Experiment 1, nouns of graspable objects were used as stimuli. Participants had to decide whether each noun referred to a natural object or an artifact by performing either a precision or a power reach-to-grasp movement. The response grasp could be compatible or incompatible with the grasp typically used to manipulate the objects to which the nouns referred. The results revealed faster reaction times (RTs) in compatible than in incompatible trials. In Experiment 2, the nouns were combined with adjectives expressing either disadvantageous information about object graspability (e.g., sharp) or information about object color (e.g., reddish). No difference in RTs between compatible and incompatible conditions was found when disadvantageous adjectives were used. Conversely, a compatibility effect occurred when color adjectives were combined with nouns referring to natural objects. Finally, in Experiment 3 the nouns were combined with adjectives expressing tactile or shape properties of the objects (e.g., long or smooth). Results revealed faster RTs in the compatible than in the incompatible condition for both noun categories. Taken together, our findings suggest that adjectives can shape the sensorimotor activation elicited by nouns of graspable objects, highlighting that language simulation goes beyond the single-word level.
Vision of the body is known to affect somatosensory perception (e.g., proprioception or tactile discrimination). However, it is unknown whether visual information about one's own body size can influence bodily action. We tested this by measuring the maximum grip aperture (MGA) parameter of grasping while eight subjects viewed a real-size, enlarged, or shrunken image of their hand reaching to grasp a cylinder. In the enlarged-view condition, the MGA decreased relative to the real-size view, as if the grasping movement were actually executed with a physically larger hand, thus requiring a smaller grip aperture to grasp the cylinder. Interestingly, the MGA remained smaller even after visual feedback was removed. In contrast, no effect was found for the shrunken-view condition. This asymmetry may reflect the fact that enlargement of body parts is experienced more frequently than shrinkage, notably during normal growth. In conclusion, vision of the body can significantly and persistently affect the internal model of the body used for motor programming.
The activation of the mirror-neuron circuit during the observation of motor acts is thought to be the basis of the human capacity to read the intentions behind the behavior of others. Growing empirical evidence shows a different activation of the mirror-neuron resonance mechanism depending on how much the observer and the observed agent share their motor repertoires. Here, the possible modulatory effect of physical similarity between the observer and the agent was investigated in three studies. We used a visuo-motor priming task in which participants were asked to categorize manipulable and non-manipulable objects into natural or man-made kinds after having watched precision and power reach-to-grasp movements. Physical similarity was manipulated by presenting reach-to-grasp movements performed by the hands of actors of three different age ranges: adults of the same age as the participants, children, and elderly people. Faster responses were observed in trials where power grip movements were performed by the adults and precision grip movements were performed by the elderly (Main Study). This finding is not in keeping with the idea that physical similarity shapes mirror-neuron resonance. Instead, it suggests an effect of the kinematic organization of the reach-to-grasp movements, which systematically changed with actor age, as revealed by a kinematic analysis. The differential effect played by adult and elderly actor primes was lost when static grasping hands (Control Study 1) and reach-to-grasp movements with uniform kinematic profiles (Control Study 2) were used. Therefore, we found preliminary evidence that mirror-neuron resonance is shaped not by physical similarity but by the kinematics of the observed action. This finding is novel as it suggests that the human ability to read the intentions behind the behavior of others may benefit from a mere visual processing of spatiotemporal patterns.
In the present study, we examine how person categorization conveyed by the combination of multiple cues modulates joint attention. In three experiments, we tested the combinatory effect of age, sex, and social status on gaze-following behaviour and pro-social attitudes. In Experiments 1 and 2, young adults were required to perform an instructed saccade towards left or right targets while viewing a to-be-ignored distracting face (female or male) gazing left or right, which could belong to a young, middle-aged, or elderly adult of high or low social status. Social status was manipulated through semantic knowledge (Experiment 1) or through visual appearance (Experiment 2). Results showed a clear combinatory effect of person perception cues on joint attention (JA). Specifically, our results showed that age and sex cues interacted with social status information depending on the modality through which it was conveyed. In Experiment 3, we further investigated our results by testing whether the identities used in Experiments 1 and 2 triggered different pro-social behaviour. The results of Experiment 3 showed that the identities that proved more distracting in Experiments 1 and 2 were also perceived as more in need and prompted helping behaviour. Taken together, our evidence shows a combinatorial effect of age, sex, and social status in modulating gaze-following behaviour, highlighting a complex and dynamic interplay between person categorization and joint attention.
• Joint attention is modulated by person categorization cues.
• Age and sex exert a combinatory effect on gaze-following behaviour.
• Social status interacts with age and sex depending on the way it is conveyed.
• Identities perceived as more in need elicited larger gaze-following behaviour.
Gaze-following behaviour is considered crucial for social interactions, which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze shift of distracters of different age ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. In particular, the over-70-year-old distracters exerted the least interference effect. The distracters of a similar age range to the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).
Can we resist another person's gaze? Marino, Barbara F M; Mirabella, Giovanni; Actis-Grosso, Rossana
Frontiers in Behavioral Neuroscience, 09/2015, Volume 9
Journal Article · Peer-reviewed · Open access
Adaptive adjustments of strategies are needed to optimize behavior in a dynamic and uncertain world. A key function in implementing flexible behavior and exerting self-control is the ability to stop the execution of an action when it is no longer appropriate for environmental demands. Importantly, stimuli in our environment are not equally relevant, and some are more valuable than others. One example is the gaze of other people, which is known to convey important social information about their direction of attention and their emotional and mental states. Indeed, gaze direction has a significant impact on the execution of voluntary saccades of an observer, since it is capable of inducing in the observer an automatic gaze-following behavior: a phenomenon named social or joint attention. Nevertheless, people can exert volitional inhibitory control over saccadic eye movements during their planning. Little is known about the interaction between gaze direction signals and volitional inhibition of saccades. To fill this gap, we administered a countermanding task to 15 healthy participants in which they were asked to observe the eye region of a face with the eyes shut appearing at central fixation. In one condition, participants were required to suppress a saccade, previously instructed by a gaze shift toward one of two peripheral targets, when the eyes were suddenly shut (social condition, SC). In a second condition, participants were asked to inhibit a saccade, previously instructed by a change in color of one of the same two targets, when a change of color of a central picture occurred (non-social condition, N-SC). We found that inhibitory control was more impaired in the SC, suggesting that actions initiated and stopped by social cues conveyed by the eyes are more difficult to withhold. This is probably due to the social value intrinsically linked to these cues and the many uses we make of them.
Head and gaze directions are used during social interactions as essential cues to infer where someone attends. When head and gaze are oriented toward opposite directions, we need to extract socially meaningful information despite stimulus conflict. Recently, a cognitive and neural mechanism for filtering out conflicting stimuli has been identified during non-social attention tasks. This mechanism is engaged proactively when conflict is anticipated in a high proportion of trials and reactively when conflict occurs infrequently. Here, we investigated whether a similar mechanism is at play for limiting distraction from conflicting social cues during gaze or head direction discrimination tasks in contexts with different probabilities of conflict. Results showed that, for the gaze direction task only (Experiment 1), inverse efficiency (IE) scores for distractor-absent trials (i.e., faces with averted gaze and centrally oriented head) were larger (indicating worse performance) when these trials were intermixed with congruent/incongruent distractor-present trials (i.e., faces with averted gaze and head tilted in the same/opposite direction) relative to when the same distractor-absent trials were shown in isolation. Moreover, on distractor-present trials, IE scores for congruent (vs. incongruent) head-gaze pairs in blocks with rare conflict were larger than in blocks with frequent conflict, suggesting that adaptation to conflict was more efficient than adaptation to infrequent events. However, when the task required discrimination of head orientation while ignoring gaze direction, performance was not affected by either block-level or current-trial congruency (Experiment 2), unless the cognitive load of the task was increased by adding a concurrent task (Experiment 3).
Overall, our study demonstrates that, during attention to social cues, proactive cognitive control mechanisms are modulated by the expectation of conflicting stimulus information at both the block and trial-sequence levels, as well as by the type of task and cognitive load. This helps to clarify the inherent differences in the distracting potential of head and gaze cues during speeded social attention tasks.
According to embodied cognition, language processing relies on the same neural structures involved when individuals experience the content of language material. If so, processing nouns expressing a motor content presented in a second language should modulate the motor system as if they were presented in the mother tongue. We tested this hypothesis using a go-no go paradigm. Stimuli included English nouns and pictures depicting either graspable or non-graspable objects. Pseudo-words and scrambled images served as controls. Italian participants, fluent speakers of English as a second language, had to respond when the stimulus was sensible (a real noun or an intact image) and refrain from responding when it was not. As foreseen by embodiment, motor responses were selectively modulated by graspable items (images or nouns), as in a previous experiment where nouns in the same category were presented in the native language.
Embodied approaches to language understanding hold that comprehension of linguistic material entails a situated simulation of the situation described. Some recent studies have shown that implicit, explicit, and relational properties of objects implied in a sentence are part of this simulation. However, the extent to which the sensorimotor specificity expressed by the linguistic constituents of a sentence contributes to situating the simulation process has not yet been adequately addressed. To fill this gap, we combined a concrete action verb with a noun denoting a graspable or non-graspable object, forming a sensible or non-sensible sentence. Verbs could express a specific action with low degrees of freedom (DoF) or an action with high DoF. Participants were asked to respond indicating whether the sentences were sensible or not. We found that simulation was active in understanding both sensible and non-sensible sentences. Moreover, the simulation was more situated with sentences containing a verb referring to an action with low DoF. The sensorimotor specificity expressed by the noun played a role in situating the simulation only when the noun was preceded by a verb denoting an action with high DoF in sensible sentences. The simulation process in understanding non-sensible sentences evoked the representations related to both the verb and the noun, which remained separate rather than being integrated as in sensible sentences. Overall, our findings are in keeping with embodied approaches to language understanding and suggest that the sensorimotor specificity of sentence constituents affects the extent to which the simulation is situated.
It is well known that the observation of graspable objects recruits the same motor representations involved in their actual manipulation. Recent evidence suggests that the presentation of nouns referring to graspable objects may exert similar effects. So far, however, it is not clear to what extent the modulation of the motor system during object observation overlaps with that related to noun processing. To address this issue, two behavioral experiments were carried out using a go-no go paradigm. Healthy participants were presented with photos and nouns of graspable and non-graspable natural objects. Scrambled images and pseudowords obtained from the original stimuli were also used. At go-signal onset (150 ms after stimulus presentation), participants had to press a key when the stimulus referred to a real object, using their right (Experiment 1) or left (Experiment 2) hand, and refrain from responding when a scrambled image or a pseudoword was presented. Slower responses were found for both photos and nouns of graspable objects as compared to non-graspable objects, independent of the responding hand. These findings suggest that processing seen graspable objects and written nouns referring to graspable objects similarly modulates the motor system.