Hearing loss is a widespread condition that is linked to declines in quality of life and mental health. Hearing aids remain the treatment of choice, but, unfortunately, even state-of-the-art devices provide only limited benefit for the perception of speech in noisy environments. While traditionally viewed primarily as a loss of sensitivity, hearing loss is also known to cause complex distortions of sound-evoked neural activity that cannot be corrected by amplification alone. This Opinion article describes the effects of hearing loss on neural activity to illustrate the reasons why current hearing aids are insufficient and to motivate the use of new technologies to explore directions for improving the next generation of devices.
Hearing loss is now widely recognized as a major cause of disability and a risk factor for dementia, but most cases still go untreated. Uptake of hearing aids is poor, partly because they provide little benefit in typical social settings.
The effects of hearing loss on neural activity in the ear and brain are complex and profound. Current hearing aids can restore overall activity levels to normal, but are ultimately insufficient because they fail to compensate for distortions in the specific patterns of neural activity that encode acoustic information, particularly in the context of speech.
Recent advances in electrophysiology and machine learning, together with a changing regulatory landscape and increasing social acceptance of wearable devices, should improve the performance and uptake of hearing aids in the near future.
Sensory function is mediated by interactions between external stimuli and intrinsic cortical dynamics that are evident in the modulation of evoked responses by cortical state. A number of recent studies across different modalities have demonstrated that the patterns of activity in neuronal populations can vary strongly between synchronized and desynchronized cortical states, i.e., in the presence or absence of intrinsically generated up and down states. Here we investigated the impact of cortical state on the population coding of tones and speech in the primary auditory cortex (A1) of gerbils, and found that responses were qualitatively different in synchronized and desynchronized cortical states. Activity in synchronized A1 was only weakly modulated by sensory input, and the spike patterns evoked by tones and speech were unreliable and constrained to a small range of patterns. In contrast, responses to tones and speech in desynchronized A1 were temporally precise and reliable across trials, and different speech tokens evoked diverse spike patterns with extremely weak noise correlations, allowing responses to be decoded with nearly perfect accuracy. Restricting the analysis of synchronized A1 to activity within up states yielded similar results, suggesting that up states are not equivalent to brief periods of desynchronization. These findings demonstrate that the representational capacity of A1 depends strongly on cortical state, and suggest that cortical state should be considered as an explicit variable in all studies of sensory processing.
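The near-perfect decoding reported for the desynchronized state can be illustrated with a toy template-matching decoder applied to synthetic binary response patterns. This is a minimal sketch, not the analysis used in the study: the token count, noise level, and nearest-template rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tokens, n_trials, n_bins = 8, 20, 50

# Synthetic population responses for 8 "speech tokens": each token evokes a
# distinct temporal pattern (reliable across trials, as in the
# desynchronized state), corrupted by a small amount of trial-to-trial noise
templates = rng.random((n_tokens, n_bins)) < 0.2
trials = np.repeat(templates[:, None, :], n_trials, axis=1)
noise = rng.random((n_tokens, n_trials, n_bins)) < 0.02
responses = np.logical_xor(trials, noise).astype(float)

# Template-matching decoder: assign each trial to the nearest mean pattern
means = responses.mean(axis=1)                  # per-token template
flat = responses.reshape(-1, n_bins)
labels = np.repeat(np.arange(n_tokens), n_trials)
dists = ((flat[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
decoded = dists.argmin(axis=1)
accuracy = (decoded == labels).mean()
```

When responses are reliable and token-specific, even this crude decoder classifies nearly every trial correctly; making the patterns unreliable (raising the noise level) or collapsing them onto a small set of shared patterns, as in the synchronized state, would destroy this.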
Neuronal responses during sensory processing are influenced by both the organization of intracortical connections and the statistical features of sensory stimuli. How these intrinsic and extrinsic factors govern the activity of excitatory and inhibitory populations is unclear. Using two-photon calcium imaging in vivo and intracellular recordings in vitro, we investigated the dependencies between synaptic connectivity, feature selectivity and network activity in pyramidal cells and fast-spiking parvalbumin-expressing (PV) interneurons in mouse visual cortex. In pyramidal cell populations, patterns of neuronal correlations were largely stimulus-dependent, indicating that their responses were not strongly dominated by functionally biased recurrent connectivity. By contrast, visual stimulation only weakly modified co-activation patterns of fast-spiking PV cells, consistent with the observation that these broadly tuned interneurons received very dense and strong synaptic input from nearby pyramidal cells with diverse feature selectivities. Therefore, feedforward and recurrent network influences determine the activity of excitatory and inhibitory ensembles in fundamentally different ways.
Cortical networks exhibit intrinsic dynamics that drive coordinated, large-scale fluctuations across neuronal populations and create noise correlations that impact sensory coding. To investigate the ...network-level mechanisms that underlie these dynamics, we developed novel computational techniques to fit a deterministic spiking network model directly to multi-neuron recordings from different rodent species, sensory modalities, and behavioral states. The model generated correlated variability without external noise and accurately reproduced the diverse activity patterns in our recordings. Analysis of the model parameters suggested that differences in noise correlations across recordings were due primarily to differences in the strength of feedback inhibition. Further analysis of our recordings confirmed that putative inhibitory neurons were indeed more active during desynchronized cortical states with weak noise correlations. Our results demonstrate that network models with intrinsically-generated variability can accurately reproduce the activity patterns observed in multi-neuron recordings and suggest that inhibition modulates the interactions between intrinsic dynamics and sensory inputs to control the strength of noise correlations.
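A minimal sketch of the central idea, that a deterministic network can generate correlated variability with no external noise, assuming simple Sompolinsky-style chaotic rate dynamics with a global feedback-inhibition term. The function names, dynamics, and parameter values are illustrative assumptions, not the fitted spiking model from the study.

```python
import numpy as np

def run_network(g_rec=1.8, g_inh=0.0, n=200, steps=4000, dt=0.05, seed=1):
    """Deterministic rate network: strong random recurrence generates
    ongoing fluctuations intrinsically; no noise is injected."""
    rng = np.random.default_rng(seed)
    J = g_rec * rng.standard_normal((n, n)) / np.sqrt(n)  # recurrent weights
    x = 0.1 * rng.standard_normal(n)                      # initial state only
    rates = np.empty((steps, n))
    for t in range(steps):
        r = np.tanh(x)
        # global feedback inhibition proportional to the mean population rate
        x += dt * (-x + J @ r - g_inh * r.mean())
        rates[t] = r
    return rates

def mean_noise_correlation(rates, burn=1000):
    """Average off-diagonal pairwise correlation after a burn-in period."""
    c = np.corrcoef(rates[burn:].T)
    off = c[~np.eye(c.shape[0], dtype=bool)]
    return off.mean()

weak = mean_noise_correlation(run_network(g_inh=0.0))
strong = mean_noise_correlation(run_network(g_inh=5.0))
```

Comparing `weak` and `strong` shows how a single feedback-inhibition parameter can modulate the pairwise correlations produced by purely intrinsic dynamics, which is the qualitative relationship the model analysis in the abstract points to.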
What constitutes a healthy diet? Mainstream media and advertisers would like you to think that the answer to this question is complicated and controversial. But science, fortunately, tells us otherwise. A Conversation about Healthy Eating brings together all the relevant science about healthy eating in one place, and it’s exactly that – a conversation; an informal discussion between a scientist and a friend about their eating habits, keeping the science firmly rooted in everyday life. The conversation moves from topics such as metabolism and digestion to gut bacteria, hormones, neuroscience and the immune system. All of these concepts are explained in accessible terms to help you understand the roles they play in maintaining a healthy diet. The conversation leads to the conclusion that staying lean and healthy simply requires avoiding the overconsumption of processed foods. While this is, of course, easier said than done, science also provides clear recommendations for how you can adapt your environment and lifestyle to make it possible. Rather than simply presenting you with the principles of healthy eating, this book will help you to develop a comprehensive understanding of the science behind the principles, including the evolutionary facts that affect the way we eat today. This understanding will allow you to ignore the noise in the media and to move forward with a healthy lifestyle that works for you.
In this study, we investigate the ability of the mammalian auditory pathway to adapt its strategy for temporal processing under natural stimulus conditions. We derive temporal receptive fields from the responses of neurons in the inferior colliculus to vocalization stimuli with and without additional ambient noise. We find that the onset of ambient noise evokes a change in receptive field dynamics that corresponds to a change from bandpass to lowpass temporal filtering. We show that these changes occur within a few hundred milliseconds of the onset of the noise and are evident across a range of overall stimulus intensities. Using a simple model, we illustrate how these changes in temporal processing exploit differences in the statistical properties of vocalizations and ambient noises to increase the information in the neural response in a manner consistent with the principles of efficient coding.
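The reported change from bandpass to lowpass temporal filtering can be sketched with two toy filter shapes. The time constants and filter forms below are illustrative assumptions, not the receptive fields derived in the study.

```python
import numpy as np

fs = 1000.0                    # sampling rate (Hz)
t = np.arange(0, 0.3, 1 / fs)  # filter time axis (s)

# Lowpass temporal filter: a single decaying exponential, which
# integrates the stimulus and passes slow envelope modulations
lowpass = np.exp(-t / 0.02)

# Bandpass temporal filter: fast excitation minus slower suppression,
# balanced so that the mean level is rejected and only a band of
# modulation frequencies is passed
bandpass = np.exp(-t / 0.01) - (0.01 / 0.03) * np.exp(-t / 0.03)

def peak_modulation_frequency(h, fs, nfft=4096):
    """Frequency (Hz) at which the filter's gain is largest."""
    gain = np.abs(np.fft.rfft(h, n=nfft))
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    return freqs[np.argmax(gain)]
```

The lowpass filter's gain peaks at 0 Hz, while the balanced difference of exponentials peaks at an intermediate modulation frequency, which is the qualitative distinction between the two receptive field regimes described above.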
The timing of action potentials relative to sensory stimuli can be precise down to milliseconds in the visual system, even though the relevant timescales of natural vision are much slower. The existence of such precision contributes to a fundamental debate over the basis of the neural code and, specifically, what timescales are important for neural computation. Using recordings in the lateral geniculate nucleus, here we demonstrate that the relevant timescale of neuronal spike trains depends on the frequency content of the visual stimulus, and that 'relative', not absolute, precision is maintained both during spatially uniform white-noise visual stimuli and naturalistic movies. Using information-theoretic techniques, we demonstrate a clear role of relative precision, and show that the experimentally observed temporal structure in the neuronal response is necessary to represent accurately the more slowly changing visual world. By establishing a functional role of precision, we link visual neuron function on slow timescales to temporal structure in the response at faster timescales, and uncover a straightforward purpose of fine-timescale features of neuronal spike trains.
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal model for human hearing, while presenting a large database of speech and noise sounds. We first used manifold learning to identify the neural subspace in which speech is encoded and found that it is low-dimensional and that the dynamics within it are profoundly distorted by hearing loss. We then trained a deep neural network (DNN) to replicate the neural coding of speech with and without hearing loss and analyzed the underlying network dynamics. We found that hearing loss primarily impacts spectral processing, creating nonlinear distortions in cross-frequency interactions that result in a hypersensitivity to background noise that persists even after amplification with a hearing aid. Our results identify a new focus for efforts to design improved hearing aids and demonstrate the power of DNNs as a tool for the study of central brain structures.
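The idea of a low-dimensional neural subspace can be illustrated with PCA, a simple linear stand-in for the manifold learning used in the study. The synthetic data below, population activity confined to a three-dimensional latent subspace plus independent noise, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_neurons, n_latent = 500, 100, 3

# Synthetic population activity: shared 3-D latent dynamics mixed into
# 100 neurons, plus small private noise on each neuron
latents = rng.standard_normal((n_time, n_latent))
loadings = rng.standard_normal((n_latent, n_neurons))
responses = latents @ loadings + 0.1 * rng.standard_normal((n_time, n_neurons))

# Linear dimensionality estimate via PCA (SVD of the centered data)
X = responses - responses.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var_explained = s**2 / np.sum(s**2)
dims_for_90pct = int(np.searchsorted(np.cumsum(var_explained), 0.9)) + 1
```

Despite the 100-dimensional recording, a handful of components capture nearly all of the variance, which is the signature of low dimensionality; nonlinear manifold methods extend the same idea to curved subspaces.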
Interaural time differences (ITDs) are the primary cue for the localization of low-frequency sound sources in the azimuthal plane. For decades, it was assumed that the coding of ITDs in the mammalian brain was similar to that in the avian brain, where information is sparsely distributed across individual neurons, but recent studies have suggested otherwise. In this study, we characterized the representation of ITDs in adult male and female gerbils. First, we performed behavioral experiments to determine the acuity with which gerbils can use ITDs to localize sounds. Next, we used different decoders to infer ITDs from the activity of a population of neurons in the central nucleus of the inferior colliculus. These results show that ITDs are not represented in a distributed manner, but rather in the summed activity of the entire population. To contrast these results with those from a population where the representation of ITDs is known to be sparsely distributed, we performed the same analysis on activity from the external nucleus of the inferior colliculus of adult male and female barn owls. Together, our results support the idea that, unlike the avian brain, the mammalian brain represents ITDs in the overall activity of a homogeneous population of neurons within each hemisphere.
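The summed-activity readout can be sketched with a toy hemispheric-channel model in which every neuron in one hemisphere has a sigmoidal ITD tuning curve with the same preferred sign. The tuning parameters below are illustrative assumptions, not fits to the recordings.

```python
import numpy as np

rng = np.random.default_rng(2)
itds = np.linspace(-200e-6, 200e-6, 81)   # interaural time differences (s)
n_neurons = 50

# Hemispheric-channel model: sigmoidal tuning curves that all increase
# toward the same (contralateral-leading) side, with varied midpoints
# and slopes across the population
midpoints = rng.uniform(-100e-6, 100e-6, n_neurons)
slopes = rng.uniform(2e4, 6e4, n_neurons)
rates = 1.0 / (1.0 + np.exp(-slopes[:, None] * (itds[None, :] - midpoints[:, None])))

# The summed population rate is monotonic in ITD, so ITD can be read
# out from the overall activity level alone
summed = rates.sum(axis=0)

def decode_itd(observed_sum):
    """Invert the monotonic sum-rate curve by interpolation."""
    return np.interp(observed_sum, summed, itds)
```

Because every tuning curve rises in the same direction, the sum is invertible and no labeled-line or sparsely distributed code is needed, which is the contrast with the avian representation drawn in the abstract.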
The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference distinguished the beat interpretations most commonly perceived by human listeners from the alternatives. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
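The proposed role of firing rate adaptation can be sketched with a toy depression-and-recovery model in which sounds arriving after longer gaps evoke larger responses. The time constants and depression depth are illustrative assumptions, not fits to the midbrain data.

```python
import numpy as np

def adapted_responses(onsets, tau=0.4, depth=0.8):
    """Scale each sound event's response by an adaptation state that is
    depressed after every event and recovers exponentially toward 1."""
    a = 1.0
    responses = []
    last = None
    for t in onsets:
        if last is not None:
            a = 1 - (1 - a) * np.exp(-(t - last) / tau)  # recovery during gap
        responses.append(a)
        a *= (1 - depth)  # depression caused by the event itself
        last = t
    return np.array(responses)

# A simple rhythm: pairs of closely spaced events separated by longer gaps
onsets = [0.0, 0.2, 1.0, 1.2, 2.0, 2.2]
r = adapted_responses(onsets)
```

In this sketch, the events that follow the longest gaps (at 0.0, 1.0, and 2.0 s) receive the highest simulated rates, creating the temporal-context-dependent points of neural emphasis described above.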