Sound localization relies on minute differences in the timing and intensity of sound arriving at both ears. Neurons of the lateral superior olive (LSO) in the brainstem process these interaural disparities by precisely detecting excitatory and inhibitory synaptic inputs. Aging generally induces selective loss of inhibitory synaptic transmission throughout the auditory pathway, including a reduction of inhibitory afferents to the LSO. Electrophysiological recordings in animals, however, have reported only minor functional changes in aged LSO. This perplexing discrepancy between anatomical and physiological observations suggests a role for activity-dependent plasticity that helps neurons retain their binaural tuning function despite the loss of inhibitory inputs. To explore this hypothesis, we used a computational model of the LSO to investigate mechanisms underlying the observed functional robustness against age-related loss of inhibitory inputs. The LSO model is of the integrate-and-fire type, enhanced with a small amount of low-voltage-activated potassium conductance, and is driven with (in)homogeneous Poissonian inputs. Without synaptic input loss, model spike rates varied smoothly with interaural time and level differences, replicating empirical tuning properties of the LSO. Reducing the number of inhibitory afferents to mimic the age-related loss of inhibition increased overall spike rates, which negatively impacted binaural tuning performance, measured as modulation depth and neuronal discriminability. To simulate a recovery process compensating for the loss of inhibitory fibers, the strength of the remaining inhibitory inputs was increased. This modification considerably weakened the effects of inhibition loss on binaural tuning, leading to improved functional performance. These neuron-level observations were further confirmed by population modeling, in which the binaural tuning properties of multiple LSO neurons were varied according to empirical measurements.
These results demonstrate the plausibility that homeostatic plasticity could effectively counteract known age-dependent loss of inhibitory fibers in LSO and suggest that behavioral degradation of sound localization might originate from changes occurring more centrally.
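The compensation scenario described above can be illustrated with a toy model. The sketch below is not the study's calibrated model (which includes a low-voltage-activated potassium conductance and empirically fitted inputs); it is a minimal leaky integrate-and-fire neuron with Poissonian excitation and inhibition, and every parameter value is an illustrative assumption. Losing half of the inhibitory fibers raises the output rate, and doubling the strength of the remaining fibers pulls it back toward the control value.

```python
import numpy as np

def lif_rate(n_inh_fibers, inh_weight, n_exc_fibers=20, exc_weight=0.5,
             exc_rate=100.0, inh_rate=100.0, t_max=5.0, dt=1e-4,
             tau_m=1e-3, v_thresh=1.0, v_reset=0.0, seed=0):
    """Leaky integrate-and-fire neuron driven by summed Poissonian
    excitatory and inhibitory fiber inputs; returns spikes per second."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        n_e = rng.poisson(n_exc_fibers * exc_rate * dt)  # excitatory events this step
        n_i = rng.poisson(n_inh_fibers * inh_rate * dt)  # inhibitory events this step
        v += -v * dt / tau_m + exc_weight * n_e - inh_weight * n_i
        if v >= v_thresh:                                # threshold crossing: spike, reset
            spikes += 1
            v = v_reset
    return spikes / t_max

rate_control = lif_rate(n_inh_fibers=20, inh_weight=0.5)  # intact inhibition
rate_aged    = lif_rate(n_inh_fibers=10, inh_weight=0.5)  # half the inhibitory fibers lost
rate_comp    = lif_rate(n_inh_fibers=10, inh_weight=1.0)  # remaining fibers strengthened 2x
```

In this caricature, homeostatic strengthening restores the mean inhibitory drive exactly, so the residual difference from control reflects only the increased input variance carried by fewer, stronger fibers.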
In computational biology, modeling is a fundamental tool for formulating, analyzing, and predicting complex phenomena. Most neuron models, however, are designed to reproduce a small set of empirical data, so their outcomes are usually not compatible or comparable with other models or datasets, making it unclear how widely applicable such models are. In this study, we investigate these aspects of modeling, namely credibility and generalizability, with a specific focus on auditory neurons involved in the localization of sound sources. The primary cues for binaural sound localization are interaural time and level differences (ITD/ILD), i.e., the timing and intensity differences of the sound waves arriving at the two ears. The lateral superior olive (LSO) in the auditory brainstem is one of the first locations where such acoustic information is computed. An LSO neuron receives temporally structured excitatory and inhibitory synaptic inputs driven by ipsi- and contralateral sound stimuli, respectively, and changes its spike rate according to binaural acoustic differences. Here we examine seven contemporary models of LSO neurons with different levels of biophysical complexity, from predominantly functional ones ('shot-noise' models) to those with more detailed physiological components (variations of integrate-and-fire and Hodgkin-Huxley-type models). These models, calibrated to reproduce known monaural and binaural characteristics of the LSO, generate largely similar results to one another in simulating ITD and ILD coding.
Our comparisons of physiological detail, computational efficiency, predictive performances, and further expandability of the models demonstrate (1) that the simplistic, functional LSO models are suitable for applications where low computational costs and mathematical transparency are needed, (2) that more complex models with detailed membrane potential dynamics are necessary for simulation studies where sub-neuronal nonlinear processes play important roles, and (3) that, for general purposes, intermediate models might be a reasonable compromise between simplicity and biological plausibility.
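As a concrete example of the simplest model class compared here, a 'shot-noise'-style LSO unit can be reduced to event counting: an output spike occurs when enough excitatory events coincide within a short window that contains no inhibitory event. The rates, window width, threshold, and ILD-to-rate mapping below are illustrative assumptions, not the calibrated values from the study.

```python
import numpy as np

def shot_noise_rate(exc_rate, inh_rate, window=0.8e-3, threshold=3,
                    t_max=2.0, seed=0):
    """Count an output spike whenever >= `threshold` excitatory events fall
    within the coincidence window ending at the current excitatory event,
    and no inhibitory event falls within that same window."""
    rng = np.random.default_rng(seed)
    exc = np.sort(rng.uniform(0, t_max, rng.poisson(exc_rate * t_max)))
    inh = np.sort(rng.uniform(0, t_max, rng.poisson(inh_rate * t_max)))
    spikes = 0
    for i, t in enumerate(exc):
        n_e = i + 1 - np.searchsorted(exc, t - window)                 # exc events in window
        n_i = np.searchsorted(inh, t) - np.searchsorted(inh, t - window)  # inh events in window
        if n_e >= threshold and n_i == 0:
            spikes += 1
    return spikes / t_max

# ILD sweep (dB, hypothetical scale; positive favors the excitatory ear).
# The contralateral level sets the inhibitory input rate.
ilds = [-20, -10, 0, 10, 20]
rates = [shot_noise_rate(exc_rate=600.0, inh_rate=600.0 * 10 ** (-ild / 20))
         for ild in ilds]
```

A sigmoid-like increase of output rate with ILD, qualitatively resembling empirical LSO tuning curves, emerges from nothing more than the two opposing Poisson rates.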
Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) the role of coincidence detection has remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. LSO neurons are also sensitive to binaural phase differences of low-frequency tones and of the envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variation in monaural AM tuning across neurons. To investigate the mechanisms underlying the observed temporal tuning properties of the LSO and their sources of variability, we used a simple coincidence-counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window influenced only the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike in monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, the empirically observed level dependence of binaural phase coding was reproduced within the framework of our minimalistic coincidence-counting model.
These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds.
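The effect of the coincidence threshold on monaural AM coding can be sketched in a few lines. The sketch below is a generic coincidence counter driven by phase-locked (thinned inhomogeneous Poisson) excitatory events; the modulation frequency, input rate, window, and refractory values are illustrative assumptions rather than the parameters fitted in the study. Raising the threshold lowers the output rate while sharpening phase-locking, quantified by vector strength.

```python
import numpy as np

def am_driven_events(rate_mean, fm, mod_depth, t_max, rng):
    """Inhomogeneous Poisson spike times phase-locked to a sinusoidal AM
    envelope, generated by thinning a homogeneous Poisson process."""
    r_max = rate_mean * (1 + mod_depth)
    cand = np.sort(rng.uniform(0, t_max, rng.poisson(r_max * t_max)))
    keep = rng.uniform(0, r_max, len(cand)) < rate_mean * (
        1 + mod_depth * np.sin(2 * np.pi * fm * cand))
    return cand[keep]

def coincidence_counter(exc, threshold, window, refractory):
    """Emit an output spike when >= `threshold` excitatory events fall in the
    window ending at the current event, honoring an absolute refractory period."""
    out, last = [], -np.inf
    for i, t in enumerate(exc):
        n = i + 1 - np.searchsorted(exc, t - window)  # events in the trailing window
        if n >= threshold and t - last >= refractory:
            out.append(t)
            last = t
    return np.array(out)

def vector_strength(spikes, fm):
    """Phase-locking index: 1 = perfect locking, 0 = uniform phases."""
    if len(spikes) == 0:
        return 0.0
    return float(np.abs(np.mean(np.exp(1j * 2 * np.pi * fm * spikes))))

rng = np.random.default_rng(1)
fm, t_max = 100.0, 20.0
exc = am_driven_events(rate_mean=800.0, fm=fm, mod_depth=0.8, t_max=t_max, rng=rng)

out_lo = coincidence_counter(exc, threshold=2, window=0.5e-3, refractory=1e-3)
out_hi = coincidence_counter(exc, threshold=4, window=0.5e-3, refractory=1e-3)
rate_lo, rate_hi = len(out_lo) / t_max, len(out_hi) / t_max
vs_in, vs_hi = vector_strength(exc, fm), vector_strength(out_hi, fm)
```

Because only near-simultaneous input clusters at the envelope peaks reach a high threshold, the output is both sparser and more tightly locked to the AM cycle than its input.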
To test a method to measure the efficacy of active middle ear implants when coupled to the round window.
Data previously published in Koka et al. (Hear Res 2010;263:128-137) were used in this study. Simultaneous measurements of cochlear microphonics (CM) and stapes velocity in response to both acoustic stimulation (forward direction) and round window (RW) stimulation (reverse direction) with an active middle ear implant (AMEI) were made in seven ears of five chinchillas. For each stimulus frequency, the amplitude of the CM was measured separately as a function of intensity (dB SPL or dB mV). The equivalent vibrational input to the cochlea was determined by equating the acoustic and AMEI-generated CM amplitudes for a given intensity. When the CM amplitudes generated by acoustic and RW stimulation were equivalent, we assumed that the same vibrational input to the cochlea was present regardless of the route of stimulation.
The measured stapes velocities for equivalent CM output from the two types of input were not significantly different at low and medium frequencies (0.25-4 kHz); however, the velocities for AMEI-RW drive were significantly lower at higher frequencies (4-14 kHz). Thus, for RW stimulation with an AMEI, stapes velocity can underestimate the mechanical input to the cochlea by ~20 dB for frequencies greater than ~4 kHz.
This study confirms that stapes velocity (with the assumption of equivalent stapes velocity for forward and reverse stimulation) cannot be used as a proxy for effective input to the cochlea when it is stimulated in the reverse direction. Future research on application of intraoperative electrophysiological measurements during surgery (CM, compound action potential, or auditory brainstem response) for estimating efficacy and optimizing device coupling and performance is warranted.
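The CM-matching step amounts to inverting a level function: measure CM amplitude versus drive level for each stimulation route, then find the AMEI drive level that reproduces the CM amplitude evoked by a given acoustic level. The level functions below are hypothetical, idealized placeholders purely to show the interpolation; they are not the chinchilla data.

```python
import numpy as np

def equivalent_level(cm_target_db, levels_db, cm_db):
    """Interpolate the drive level (for one stimulation route) that would
    produce the target CM amplitude; cm_db must be monotonically increasing."""
    return float(np.interp(cm_target_db, cm_db, levels_db))

# Hypothetical CM level functions at one frequency (all dB values illustrative):
acoustic_levels = np.array([40.0, 50.0, 60.0, 70.0, 80.0])   # dB SPL
acoustic_cm     = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])   # dB re 1 uV
ameid_levels    = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])  # dB mV to AMEI
ameid_cm        = np.array([-15.0, -5.0, 5.0, 15.0, 25.0])   # dB re 1 uV

# CM produced by a 65 dB SPL acoustic tone:
cm_at_65 = float(np.interp(65.0, acoustic_levels, acoustic_cm))
# AMEI drive producing that same CM, i.e. the equivalent vibrational input:
eq_drive = equivalent_level(cm_at_65, ameid_levels, ameid_cm)
```

With these placeholder curves, a 65 dB SPL tone maps to a 15 dB CM, which the AMEI route reproduces at a 10 dB mV drive; the real procedure does the same lookup per frequency on measured level functions.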
The Precedence Effect in Sound Localization. Brown, Andrew D.; Stecker, G. Christopher; Tollin, Daniel J. Journal of the Association for Research in Otolaryngology, 02/2015, Volume 16, Issue 1.
In ordinary listening environments, acoustic signals reaching the ears directly from real sound sources are followed after a few milliseconds by early reflections arriving from nearby surfaces. Early reflections are spectrotemporally similar to their source signals but commonly carry spatial acoustic cues unrelated to the source location. Humans and many other animals, including nonmammalian and even invertebrate animals, are nonetheless able to effectively localize sound sources in such environments, even in the absence of disambiguating visual cues. Robust source localization despite concurrent or nearly concurrent spurious spatial acoustic information is commonly attributed to an assortment of perceptual phenomena collectively termed “the precedence effect,” characterizing the perceptual dominance of spatial information carried by the first-arriving signal. Here, we highlight recent progress and changes in the understanding of the precedence effect and related phenomena.
Objective assessment of spatial and binaural hearing deficits remains a major clinical challenge. The binaural interaction component (BIC) of the auditory brainstem response (ABR) holds promise as a non-invasive biomarker for diagnosing such deficits. However, while comparative studies have reported robust BIC in animal models, the BIC in humans can sometimes be unreliably evoked even in subjects with normal hearing. Here we explore the hypothesis that the standard methodology for collecting monaural ABRs may not be ideal for electrophysiological assessment of binaural hearing. This study aims to improve ABR BIC measurements by determining more optimal stimuli to evoke it. Building on previous methodology demonstrated to enhance the peak amplitude of monaural ABRs, we constructed a series of level-dependent chirp stimuli based on empirically derived latencies of monaural-evoked ABR waves I and IV and of the binaural-evoked BIC DN1, the most prominent BIC peak, in a cohort of ten chinchillas. We hypothesized that chirps designed based on BIC DN1 latency would specifically enhance across-frequency temporal synchrony in the afferent inputs to the binaural circuits that produce the BIC and would thus produce a larger DN1 than either traditional clicks or chirps designed to optimize monaural ABRs. Compared to clicks, we found that level-specific chirp stimuli evoked significantly greater BIC DN1 amplitudes, and this effect persisted across all stimulation levels tested. However, we found no significant differences between BICs resulting from chirps created using binaural-evoked BIC DN1 latencies and those using monaural-evoked ABR waves I or IV. These data indicate that existing level-specific, monaural-based chirp stimuli may improve BIC detectability and reduce variability in human BIC measurements.
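The logic of a latency-based chirp can be sketched generically: given an estimate of how response latency varies with frequency, each frequency component is advanced so that all bands arrive at the response generator nearly simultaneously. The power-law latency function, component frequencies, and all other parameters below are hypothetical placeholders, not the empirically derived chinchilla latencies or the level-dependent stimuli used in the study.

```python
import numpy as np

def latency_compensated_chirp(freqs, latency_fn, fs=44100.0, dur=0.02):
    """Sum tone components whose onsets are staggered so that, after each
    component's frequency-dependent latency, all components arrive together."""
    t = np.arange(int(dur * fs)) / fs
    sig = np.zeros_like(t)
    t_align = max(latency_fn(f) for f in freqs)
    for f in freqs:
        delay = t_align - latency_fn(f)            # longest-latency band starts first
        sig += np.sin(2 * np.pi * f * (t - delay)) * (t >= delay)
    return sig / len(freqs)                        # normalize peak to <= 1

# Hypothetical latency model: latency decreases with frequency,
# so low-frequency components must be presented earlier.
latency = lambda f: 5e-3 * (f / 500.0) ** -0.4
chirp = latency_compensated_chirp([500.0, 1000.0, 2000.0, 4000.0, 8000.0], latency)
```

The result is an upward frequency sweep: the low-frequency, long-latency bands lead and the high-frequency bands trail, compensating for the latency gradient so the evoked contributions synchronize.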
Sound location in azimuth is signaled by differences in the times of arrival (interaural time differences, ITDs) and the amplitudes (interaural level differences, ILDs) of the stimuli at the two ears. Psychophysical studies have shown that low- and high-frequency sounds are localized based on ITDs and ILDs, respectively, suggesting that dual mechanisms mediate localization. The anatomical and physiological bases for this “duplex theory” of localization are found in the medial (MSO) and lateral (LSO) superior olives, two of the most peripheral sites in the ascending auditory pathway that receive inputs from both ears. The MSO and LSO are believed to be responsible for the initial encoding of ITDs and ILDs, respectively. Here the author focuses on ILDs as a cue to location and on the role of the LSO in encoding ILDs. Evidence from disparate fields of study supports the hypothesis that the LSO is the initial ILD processor in the mammalian auditory system. NEUROSCIENTIST 9(2): 127–143, 2003.
Temporary conductive hearing loss (CHL) can lead to hearing impairments that persist beyond resolution of the CHL. In particular, unilateral CHL leads to deficits in auditory skills that rely on binaural input (e.g., spatial hearing). Here, we asked whether single neurons in the auditory midbrain, which integrate acoustic inputs from the two ears, are altered by a temporary CHL. We introduced 6 weeks of unilateral CHL in young adult chinchillas via a foam earplug. Following earplug removal and restoration of peripheral input, single-unit recordings from inferior colliculus (ICC) neurons revealed that the CHL decreased the efficacy of inhibitory input to the ICC contralateral to the earplug and increased inhibitory input ipsilateral to the earplug, effectively creating a higher proportion of monaurally responsive neurons than binaurally responsive ones. Moreover, this resulted in a ∼10 dB shift in the coding of a binaural sound-location cue (interaural level difference, ILD) in ICC neurons relative to controls. The direction of the shift was consistent with compensation for the altered ILDs caused by the CHL. ICC neuron responses carried ∼37% less information about ILDs after CHL than control neurons did. Cochlear peripheral-evoked responses confirmed that the CHL did not damage the auditory periphery. We find that a temporary CHL altered auditory midbrain neurons by shifting binaural responses to ILD acoustic cues, suggesting a compensatory form of plasticity occurring by at least the level of the auditory midbrain, the ICC.
In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation) or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded by even complete decorrelation of the left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of the left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than that observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable.
In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli.
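The single-free-parameter idea can be caricatured in a few lines of code: excitatory events are vetoed by any inhibitory event falling within a preceding interaction window, and the two event trains are statistically independent, mimicking complete interaural decorrelation. In this sketch (whose rates and ILD-to-rate mapping are illustrative assumptions, not the paper's model), a ~3 ms, midbrain-like window still yields deep modulation of output rate by the inhibitory (contralateral-level-driven) rate, whereas a sub-millisecond, brainstem-like window yields much shallower ILD modulation.

```python
import numpy as np

def ei_veto_rate(exc_rate, inh_rate, window, t_max=10.0, seed=0):
    """Excitatory events pass unless any (independent, i.e. fully decorrelated)
    inhibitory event occurred within the preceding interaction window."""
    rng = np.random.default_rng(seed)
    exc = np.sort(rng.uniform(0, t_max, rng.poisson(exc_rate * t_max)))
    inh = np.sort(rng.uniform(0, t_max, rng.poisson(inh_rate * t_max)))
    # count inhibitory events in [t - window, t) for each excitatory time t
    n_i = np.searchsorted(inh, exc) - np.searchsorted(inh, exc - window)
    return np.count_nonzero(n_i == 0) / t_max

def modulation_depth(window):
    """Depth of rate modulation as inhibition sweeps a hypothetical ILD range."""
    rates = [ei_veto_rate(exc_rate=200.0, inh_rate=r, window=window)
             for r in (50.0, 150.0, 300.0, 500.0)]
    return (max(rates) - min(rates)) / max(rates)

md_long  = modulation_depth(window=3e-3)    # ~3 ms EI interaction (midbrain-like)
md_short = modulation_depth(window=0.5e-3)  # sub-ms interaction (brainstem-like)
```

Because the veto probability depends only on the product of inhibitory rate and window duration, a longer window converts level (rate) differences into output-rate differences without requiring any fine temporal correlation between the ears.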