Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference, while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions.
Traditionally, research on vision focused on its role in perception and our cognitive life. Except for the study of eye movements, which have been regarded as an information-seeking adjunct to visual perception, little attention was paid to the way in which vision is used to control our actions, particularly the movements of our hands and limbs. Over the last 25 years, all of that has changed. Researchers are now actively investigating the way in which vision is used to control a broad range of complex goal-directed actions – and are exploring the neural substrates of that control. A new model of the functional organization of the visual pathways in the primate cerebral cortex has emerged, one that posits a division of labor between vision-for-action (the dorsal stream) and vision-for-perception (the ventral stream). In this review, I examine some of the seminal work on the role of vision in the control of manual prehension and on the visual cues that play a critical role in this important human skill. I then review the key evidence for the perception–action model, particularly with reference to the role of the dorsal stream in the control of manual prehension, touching on recent work that both reinforces and challenges this account of the organization of the visual system.
A small number of blind people are adept at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. Yet the neural architecture underlying this type of aid-free human echolocation has not been investigated. To tackle this question, we recruited echolocation experts, one early- and one late-blind, and measured functional brain activity in each of them while they listened to their own echolocation sounds.
When we compared brain activity for sounds that contained both clicks and the returning echoes with brain activity for control sounds that did not contain the echoes, but were otherwise acoustically matched, we found activity in calcarine cortex in both individuals. Importantly, for the same comparison, we did not observe a difference in activity in auditory cortex. In the early-blind, but not the late-blind participant, we also found that the calcarine activity was greater for echoes reflected from surfaces located in contralateral space. Finally, in both individuals, we found activation in middle temporal and nearby cortical regions when they listened to echoes reflected from moving targets.
These findings suggest that processing of click-echoes recruits brain regions typically devoted to vision rather than audition in both early and late blind echolocation experts.
Visual illusions have provided compelling evidence for a dissociation between perception and action. For example, when two different-sized objects are placed on opposite ends of the Ponzo illusion, people erroneously perceive the physically smaller object to be bigger than the physically larger one, but when they pick up the objects, their grip aperture reflects the real difference in size between the objects. This and similar findings have been demonstrated almost entirely for the right hand in right handers. The scarce research that has examined right- and left-handed subjects in this context has typically used only small samples. Here, we extended this research with a larger sample size (more than 50 in each group) in a version of the Ponzo illusion that allowed us to disentangle the effects of real and illusory size on action and perception in a much more powerful way. We also collected a wide range of kinematic measures to assess possible differences in visuomotor control in left and right handers. The results showed that the dissociation between perception and action persisted for both hands in right handers, but only for the right hand in left handers. The left hand of left handers was sensitive to the illusion. Left handers also showed more variable and slower movements, as well as larger safety margins in both hands. These findings suggest that grasping in left handers may require more cognitive supervision, which could lead to greater sensitivity to visual context, particularly with their dominant left hand.
•In right handers, visual illusions affect perceptual judgements of size but not grasping.
•We tested this dissociation in left handers.
•Surprisingly, the left but not the right hand of left handers was affected by the illusion.
•Left handers showed more variable grasping and were more cautious in later stages of the movement.
•The relations between action and perception are modulated by handedness.
Our expectations of an object's heaviness not only drive our fingertip forces, but also our perception of heaviness. This effect is highlighted by the classic size-weight illusion (SWI), where different-sized objects of identical mass feel different weights. Here, we examined whether these expectations are sufficient to induce the SWI in a single wooden cube when lifted without visual feedback, by varying the size of the object seen prior to the lift.
Participants, who believed that they were lifting the same object that they had just seen, reported that the weight of the single, standard-sized cube that they lifted on every trial varied as a function of the size of object they had just seen. Seeing the small object before the lift made the cube feel heavier than it did after seeing the large object. These expectations also affected the fingertip forces that were used to lift the object when vision was not permitted. The expectation-driven errors made in early trials were not corrected with repeated lifting, and participants failed to adapt their grip and load forces from the expected weight to the object's actual mass in the same way that they could when lifting with vision.
Vision appears to be crucial for the detection, and subsequent correction, of the ostensibly non-visual grip and load force errors that are a common feature of this type of object interaction. Expectations of heaviness are not only powerful enough to alter the perception of a single object's weight, but also continually drive the forces we use to lift the object when vision is unavailable.
Echolocation in humans: an overview. Thaler, Lore; Goodale, Melvyn A.
Wiley Interdisciplinary Reviews: Cognitive Science, November/December 2016, Volume 7, Issue 6.
Journal article, peer-reviewed, open access.
Bats and dolphins are known for their ability to use echolocation. They emit bursts of sounds and listen to the echoes that bounce back to detect the objects in their environment. What is not as well‐known is that some blind people have learned to do the same thing, making mouth clicks, for example, and using the returning echoes from those clicks to sense obstacles and objects of interest in their surroundings. The current review explores some of the research that has examined human echolocation and the changes that have been observed in the brains of echolocation experts. We also discuss potential applications and assistive technology based on echolocation. Blind echolocation experts can sense small differences in the location of objects, differentiate between objects of various sizes and shapes, and even between objects made of different materials, just by listening to the reflected echoes from mouth clicks. It is clear that echolocation may enable some blind people to do things that are otherwise thought to be impossible without vision, potentially providing them with a high degree of independence in their daily lives and demonstrating that echolocation can serve as an effective mobility strategy in the blind. Neuroimaging has shown that the processing of echoes activates brain regions in blind echolocators that would normally support vision in the sighted brain, and that the patterns of these activations are modulated by the information carried by the echoes. This work is shedding new light on just how plastic the human brain is. WIREs Cogn Sci 2016, 7:382‐393. doi: 10.1002/wcs.1408
This article is categorized under:
Psychology > Brain Function and Dysfunction
Psychology > Perception and Psychophysics
Neuroscience > Plasticity
The study of echolocation in blind humans is a vibrant area of research in psychology and the neurosciences. It is not only a fascinating subject in its own right, but provides a window into neuroplasticity, affording researchers a fresh paradigm for probing how the brain deals with novel sensory information.
Previous research has shown an unintuitive effect of facial expression on perceived age: smiling faces are perceived as older compared to neutral faces of the same people. The aging effect of smiling (AES), which is thought to result from the presence of smile-related wrinkles around the eyes, contradicts the common belief that smiling faces should be perceived as younger, not older. Previous research, however, has focused on faces of young adults, where the absence of inherent, age-related wrinkles and other age signs is offset by the weight of the smile-related wrinkles. In a series of experiments, we tested whether the AES extends to male and female faces in older age groups. We replicated the AES in young adults (20-39) and showed that it disappeared in older adults (60-79) of both genders. For photos of middle-aged adults (40-59), however, the AES was found only for male, but not for female faces, which showed fewer and less prominent smile-related wrinkles. The results suggest that a person's apparent age is perceived in a holistic manner in which age-related cues in the region of the eyes are weighted against age cues in other regions of the face.
Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects and that could potentially explain the animate-inanimate distinction in the VTC is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched in order to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate-inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate-inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of an animal's ability to move and think) is present in parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that faces are not necessary in order to observe high-level animacy information (e.g., agency) in parts of the VTC. A possible explanation could be that this animacy-related activity is driven not by faces per se, or the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC might treat the face as a proxy for agency, a ubiquitous feature of familiar animals.
•Animate and inanimate objects elicited distinct fMRI activity patterns in the ventral temporal cortex (VTC).
•The animate/inanimate distinction was more pronounced for animals with faces compared to visually-matched faceless animals.
•High-level animacy-related information such as agency was reflected in parts of the VTC even in the absence of faces.
•Animate/inanimate distinction may be driven not by faces per se, but by other features that correlate with face presence.
•Faces strongly influence activity in the VTC, making it important to carefully control for this feature in future studies.
Skilled manipulation requires the ability to predict the weights of viewed objects based on learned associations linking object weight to object visual appearance 1–5. However, the neural mechanisms involved in extracting weight information from viewed object properties are unknown. Given that ventral visual pathway areas represent a wide variety of object features 6–11, one intriguing but as yet untested possibility is that these areas also represent object weight, a nonvisual motor-relevant object property. Here, using event-related fMRI and pattern classification techniques, we tested the novel hypothesis that object-sensitive regions in occipitotemporal cortex (OTC), in addition to traditional motor-related brain areas, represent object weight when preparing to lift that object. In two studies, the same participants prepared and then executed lifting actions with objects of varying weight. In the first study, we show that when lifting visually identical objects, where predicted weight is based solely on sensorimotor memory, weight is represented in object-sensitive OTC. In the second study, we show that when object weight is associated with a particular surface texture, that texture-sensitive OTC areas also come to represent object weight. Notably, these texture-sensitive areas failed to carry information about weight in the first study, when object surface properties did not specify weight. Our results indicate that the integration of visual and motor-relevant object information occurs at the level of single OTC areas and provide evidence that the ventral visual pathway is actively and flexibly engaged in processing object weight, an object property critical for action planning and control.
•Object weight can be decoded from somatomotor and occipitotemporal cortex (OTC)
•Object-sensitive OTC encodes weight derived from sensorimotor memory and texture
•Texture-sensitive OTC only encodes weight when predicted from object texture
•Findings suggest that OTC encodes object features critical for skilled manipulation
Gallivan et al. show that object-sensitive occipitotemporal cortex (OTC) represents object weight, a nonvisual motor-relevant object property, during action planning. They further show that object weight information comes to be flexibly represented in texture-sensitive OTC once an object’s weight and a particular surface texture become reliably linked.