Despite claims in the popular press, experiments investigating whether female observers are more efficient than male observers at processing expressions of emotion have produced inconsistent findings. In the present study, participants were asked to categorize fear and disgust expressions displayed auditorily, visually, or audio-visually. Results revealed an advantage for women in all stimulus-presentation conditions. We also observed more nonlinear probabilistic summation in the bimodal conditions in female than in male observers, indicating greater neural integration of the different streams of sensory-emotional information. These findings indicate robust gender differences in the multisensory perception of emotion expression.
Highlights • Cochlear implant (CI) users who performed well on the speech recognition task have auditory evoked potentials similar to those of the normal-hearing participants. • CI users who performed more poorly on the speech recognition task present significantly abnormal auditory evoked potentials compared with the better performers and the normal-hearing participants. • The mismatch negativity auditory evoked potential can be used to assess speech recognition in CI users.
Recent work suggests that once the auditory cortex of deaf persons has been reorganized by cross-modal plasticity, it can no longer respond to signals from a cochlear implant (CI) installed subsequently. To further examine this issue, we compared the evoked potentials involved in the processing of visual stimuli between CI users and hearing controls. The stimuli were concentric circles replaced by a different overlapping shape, inducing a shape transformation known to activate the ventral visual pathway in human adults. All CI users had had their device implanted for >1 year, but obtained different levels of auditory performance following training to establish language comprehension. Seven of the 13 patients showed good capacities for speech recognition with the CI (good performers) while the six others demonstrated poor speech recognition abilities (poor performers). The evoked potentials of all patients showed larger amplitudes, with different distributions of scalp activations between the two groups. The poor performers exhibited broad, anteriorly distributed, high P2 amplitudes over the cortex, whereas the good performers showed significantly higher P2 amplitudes over visual occipital areas. These results suggest a profound cross-modal reorganization in the poor performers and an intramodal reorganization in the good performers. We interpret these data as reflecting enhanced audiovisual coupling, which may be the key to long-term functional improvement in speech discrimination in CI users.
Although the topic of sensory integration has raised increasing interest, the differing behavioral outcomes of combining unisensory versus multisensory inputs have, surprisingly, been only scarcely investigated. In the present experiment, observers were required to respond as fast as possible to (1) lateralized visual or tactile targets presented alone, (2) double stimulation within the same modality or (3) double stimulation across modalities. Each combination was either delivered within the same hemispace (spatially aligned) or in different hemispaces (spatially misaligned). Results show that the redundancy gains (RG) obtained from the cross-modal conditions were far greater than those obtained from combinations of two visual or two tactile targets. Consistently, we observed that the reaction time distributions of cross-modal targets, but not those of within-modal targets, surpass the predicted reaction time distribution based on the summed probability distributions of each constituent stimulus presented alone. Moreover, we found that the spatial alignment of the targets did not influence the RG obtained in cross-modal conditions, whereas within-modal stimuli produced a greater RG when the targets were delivered in separate hemispaces. These results suggest that within-modal and cross-modal integration are not only distinguishable by the amount of facilitation they produce, but also by the spatial configuration under which this facilitation occurs. Our study strongly supports the notion that estimates of the same event that are more independent produce enhanced integrative gains.
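The comparison above, of an empirical bimodal reaction-time distribution against the summed probability distributions of the unisensory conditions, is the standard race-model test. A minimal sketch of such a test follows; this is not the authors' analysis code, and the reaction times are fabricated purely for illustration.

```python
# Hedged sketch of the race-model (probability-summation) test: for any
# time t, a parallel race of independent channels A and B predicts
#   P(RT_bimodal <= t) <= P(RT_A <= t) + P(RT_B <= t).
# Empirical bimodal CDFs that exceed this bound indicate integrative gains
# beyond statistical facilitation.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative probability of having responded by time t."""
    return float(np.mean(np.asarray(rts, dtype=float) <= t))

def race_model_violation(rt_a, rt_b, rt_bimodal, times):
    """Per time point, how far the bimodal CDF exceeds the race-model bound."""
    bound = np.array([min(1.0, ecdf(rt_a, t) + ecdf(rt_b, t)) for t in times])
    bimodal = np.array([ecdf(rt_bimodal, t) for t in times])
    return bimodal - bound  # positive values violate the race model

# Fabricated reaction times (ms) for illustration only:
rng = np.random.default_rng(0)
rt_visual = rng.normal(350, 40, 200)
rt_tactile = rng.normal(360, 40, 200)
rt_cross = rng.normal(290, 30, 200)   # faster than either modality alone
times = np.arange(250, 450, 10)
violations = race_model_violation(rt_visual, rt_tactile, rt_cross, times)
print("race model violated:", bool(np.any(violations > 0)))
```

In practice the test is run on quantiles of each participant's distributions and the violations are assessed statistically across participants; the sketch shows only the core inequality.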
Do blind persons develop capacities of their remaining senses that exceed those of sighted individuals? Beyond anecdotal suggestions, two views based on experimental studies have been advanced. The first proposes that blind individuals should be severely impaired, given that vision is essential to develop spatial concepts. The second suggests that compensation occurs through the remaining senses, allowing them to develop an accurate concept of space. Here we investigate how an ecologically critical function, namely three-dimensional spatial mapping, is carried out by early-blind individuals with or without residual vision. Subjects were tested under monaural and binaural listening conditions. We find that early-blind subjects can map the auditory environment with accuracy equal to or better than that of sighted subjects. Furthermore, unlike sighted subjects, they can correctly localize sounds monaurally. Surprisingly, blind individuals with residual peripheral vision localized sounds less precisely than sighted or totally blind subjects, confirming that compensation varies according to the aetiology and extent of blindness. Our results resolve a long-standing controversy in that they provide behavioural evidence that totally blind individuals have better auditory ability than sighted subjects, enabling them to compensate for their loss of vision.
In the absence of visual input, the question arises as to how complex spatial abilities develop and how the brain adapts to the absence of this modality. We explored navigational skills in both early and late blind individuals, as well as structural differences in the hippocampus, a brain region well known to be involved in spatial processing. Thirty-eight participants were divided into three groups: early blind individuals (n = 12; loss of vision before 5 years of age; mean age 33.8 years), late blind individuals (n = 7; loss of vision after 14 years of age; mean age 39.9 years) and 19 sighted, blindfolded matched controls. Subjects undertook route-learning and pointing tasks in a maze and a spatial layout task. Anatomical data were collected by MRI. Remarkably, we not only show that blind individuals possess navigational skills superior to those of controls on the route-learning task, but we also show for the first time a significant volume increase of the hippocampus in blind individuals (F(1,36) = 6.314, P ≤ 0.01; blind: mean = 4237.00 mm³, SE = 107.53; sighted: mean = 3905.74 mm³, SE = 76.27), irrespective of whether their blindness was congenital or acquired. Overall, our results shed new light not only on the construction of spatial concepts and the non-necessity of vision for their proper development, but also on the hippocampal plasticity observed in adult blind individuals who must navigate this space.
We present a multichip structure assembled with a medical-grade stainless-steel microelectrode array intended for neural recordings from multiple channels. The design features a mixed-signal integrated circuit (IC) that handles conditioning, digitization, and time-division multiplexing of neural signals, and a digital IC that provides control, bandwidth reduction, and data communications for telemetry toward a remote host. Bandwidth reduction is achieved through action potential detection and complete capture of waveforms by means of on-chip data buffering. The adopted architecture uses high parallelism and low-power building blocks for safety and long-term implantability. Both ICs are fabricated in a 0.18-µm CMOS process and are subsequently mounted on the base of the microelectrode array. The chips are stacked according to a vertical integration approach for better compactness. The presented device integrates 16 channels and is scalable to hundreds of recording channels. Its performance was validated on a testbench with synthetic neural signals. The proposed interface has a power consumption of 138 µW per channel, a size of 2.30 mm², and achieves a bandwidth reduction factor of up to 48 with typical recordings.
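The bandwidth-reduction strategy described above, detecting action potentials and buffering their complete waveforms instead of streaming raw samples, can be sketched in software. The sketch below is not the device's firmware; the sampling rate, window length, and threshold rule are illustrative assumptions, and the test signal is synthetic.

```python
# Hedged sketch of threshold-based action-potential detection with full
# waveform capture, the principle behind the chip's bandwidth reduction.
# All parameters here are illustrative assumptions, not the chip's values.
import numpy as np

FS = 20_000          # assumed sampling rate, Hz
WINDOW = 40          # samples kept per detected spike (2 ms at 20 kHz)

def detect_and_capture(signal, k=4.0):
    """Detect threshold crossings and return the captured spike snippets."""
    # Robust noise estimate from the median absolute deviation,
    # a common choice for setting spike thresholds.
    sigma = np.median(np.abs(signal)) / 0.6745
    threshold = k * sigma
    snippets, i = [], 0
    while i < len(signal) - WINDOW:
        if abs(signal[i]) > threshold:
            snippets.append(signal[i:i + WINDOW])  # complete waveform buffered
            i += WINDOW                            # skip past the captured window
        else:
            i += 1
    return snippets

# One second of synthetic noise with 50 embedded spike-like events:
rng = np.random.default_rng(1)
sig = rng.normal(0, 1, FS)
for t in rng.choice(FS - WINDOW, 50, replace=False):
    sig[t:t + 10] += 8.0 * np.hanning(10)          # crude spike shape

caught = detect_and_capture(sig)
ratio = FS / (len(caught) * WINDOW)                # raw samples vs. transmitted
print(f"captured {len(caught)} snippets, reduction factor ~{ratio:.0f}x")
```

Transmitting only the detected snippets rather than the full sample stream is what makes the reduction factor scale with how sparsely the channel spikes, which is why the abstract reports an "up to 48" figure for typical recordings.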
Highlights • Used fMRI to explore processing of two visual motion stimuli in the deaf. • Stimuli were designed to differentially recruit the two visual streams. • Additional recruitment of auditory, visual and multimodal areas in the deaf. • Such recruitment occurred for both motion and static stimuli. • Suggests an undifferentiated crossmodal recruitment following deprivation.
Highlights • Aging increased the number of visual neurons responding sluggishly to looming CCS gratings and SAM white noise. • In aged rats, auditory stimulation did not produce any significant enhancement of the visual neuronal responses. • Aging drastically diminished the capacity to integrate audiovisual looming signals.
Perception of trigeminal mixtures. Filiou, Renée-Pier; Lepore, Franco; Bryant, Bruce; ...
Chemical Senses, Volume 40, Issue 1. Journal Article. Peer-reviewed. Open access.
The trigeminal system is a chemical sense that allows the perception of chemosensory information in our environment. However, contrary to smell and taste, we lack a thorough understanding of the trigeminal processing of mixtures. We therefore investigated trigeminal perception using mixtures of 3 relatively receptor-specific agonists together with one control odor in different proportions to determine basic perceptual dimensions of trigeminal perception. We found that 4 main dimensions were linked to trigeminal perception: sensations of intensity, warmth, coldness, and pain. We subsequently investigated the perception of binary mixtures of trigeminal stimuli along these 4 perceptual dimensions, using different concentrations of a cooling stimulus (eucalyptol) mixed with a stimulus that evokes warmth perception (cinnamaldehyde). To determine whether sensory interactions are mainly of central or peripheral origin, we presented stimuli as a physical "mixture" or as a "combination" delivered separately to the two nostrils. Results showed that mixtures generally yielded higher ratings than combinations on the trigeminal dimensions "intensity," "warm," and "painful," whereas combinations yielded higher ratings than mixtures on the trigeminal dimension "cold." These results suggest dimension-specific interactions in the perception of trigeminal mixtures, which may be explained by interactions taking place at peripheral or central levels.