Adaptation to visual motion can induce marked distortions of the perceived spatial location of subsequently viewed stationary objects. These positional shifts are direction specific and exhibit tuning for the speed of the adapting stimulus. In this study, we sought to establish whether comparable motion-induced distortions of space can be induced in the auditory domain. Using individually measured head-related transfer functions (HRTFs), we created auditory stimuli that moved either leftward or rightward in the horizontal plane. Participants adapted to unidirectional auditory motion presented at a range of speeds and then judged the spatial location of a brief stationary test stimulus. All participants displayed direction-dependent and speed-tuned shifts in perceived auditory position relative to a 'no adaptation' baseline measure. To permit direct comparison between effects in different sensory domains, measurements of visual motion-induced distortions of perceived position were also made using stimuli equated in positional sensitivity for each participant. Both the overall magnitude of the observed positional shifts and the nature of their tuning with respect to adaptor speed were similar in each case. In a third experiment, participants adapted to visual motion prior to making auditory position judgements. As in the previous experiments, shifts in the direction opposite to that of the adapting motion were observed. These results add to a growing body of evidence suggesting that the neural mechanisms that encode visual and auditory motion are more similar than previously thought.
The bone-anchored hearing aid (BAHA) transduces airborne sound into skull vibration. In current bilateral BAHA configurations, a sound source directly facing the listener produces forces that are in phase with each other and directed roughly towards the center of the head. Below approximately 1000 Hz, the two cochleae respond in approximately the same direction and with approximately the same phase to each BAHA. It may therefore be preferable to drive bilateral BAHAs such that when one pushes, the other pulls. This can be achieved by adjusting the relative phase offset of the BAHAs, and doing so results in greater vibration and improved hearing thresholds. In this paper we compare the performance of bilateral BAHAs driven in this configuration to the standard configuration. In twelve normal participants we show significant improvements in low-frequency (≤750 Hz) hearing thresholds using out-of-phase BAHAs. The threshold measurements are further supported by velocimetric measurements taken at the cochlear promontory in a cadaveric head. Comparing the vibration arising from each configuration confirms that out-of-phase driving results in greater vibration. Neither dataset shows a change in thresholds, improved or reduced, at high frequencies.
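The low-frequency reasoning above can be illustrated with a deliberately simplified sketch. Under the assumption (not the paper's full model) that below ~1 kHz the skull behaves roughly as a single rigid mass driven along the interaural axis, forces from the two transducers sum vectorially: in-phase drive from opposite sides of the head cancels, while a 180° phase offset makes the forces add.

```python
import numpy as np

# Simplified rigid-body sketch (an assumption, not the paper's model):
# below ~1 kHz, treat the skull as one mass driven along the interaural
# axis.  Each BAHA pushes toward the centre of the head, i.e. the two
# forces have opposite signs in a common coordinate frame.

f = 500.0                       # probe frequency, Hz (low-frequency regime)
t = np.linspace(0, 0.01, 1000)  # 10 ms of signal
F = np.sin(2 * np.pi * f * t)   # unit-amplitude drive waveform

# In-phase drive: both transducers push toward the centre at the same
# instant, so along the interaural axis the forces oppose and cancel.
net_in_phase = (+F) + (-F)

# Out-of-phase drive (180 degree offset): when one pushes, the other
# pulls, so the forces add and the net drive doubles (~6 dB more).
net_out_of_phase = (+F) - (-F)

print(np.max(np.abs(net_in_phase)))      # ~0: forces cancel
print(np.max(np.abs(net_out_of_phase)))  # ~2: forces add
```

The doubling of the net driving force in this toy model is consistent with the direction of the reported effect: greater promontory vibration and lower thresholds at low frequencies, with no benefit expected once the rigid-body assumption breaks down at higher frequencies.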
The role of the tensor tympani (TT) muscle of the middle ear is not well understood, and there is a long history of the implied, but unproven, part it plays in various inner ear disorders, particularly Meniere's disease. In order to gain an improved understanding of the effect of TT contraction, a lumped parameter mechanical model of the middle ear including the TT has been developed. This model uses a previously developed lumped parameter model of the middle ear ossicular chain along with an experimentally obtained viscoelastic material model for the TT in order to predict the changes in the acoustic impedance of the middle ear that occur when the TT is contracted. Qualitatively, the results of the computer model agree quite well with similar laser Doppler vibrometer measurements from various cadaveric temporal bones in which TT contraction has been simulated using force loading.
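The qualitative behaviour of such a model can be sketched with a single-degree-of-freedom reduction (an assumption for illustration only; the paper's model has many more elements). If TT contraction is represented, hypothetically, as an increase in stiffness, the impedance magnitude rises below resonance:

```python
import numpy as np

# Illustrative single-degree-of-freedom sketch (an assumption; not the
# authors' lumped parameter model): mechanical impedance of a
# mass-spring-damper, Z(w) = b + j*(w*m - k/w).  Tensor tympani
# contraction is represented, hypothetically, as increased stiffness k.
# All parameter values below are made up for illustration.

def impedance(freq_hz, m=1e-6, b=0.05, k=1e3):
    """Mechanical impedance of a mass-spring-damper at freq_hz."""
    w = 2 * np.pi * freq_hz
    return b + 1j * (w * m - k / w)

f = 500.0                             # probe frequency, well below resonance
z_relaxed    = impedance(f, k=1e3)    # hypothetical resting stiffness
z_contracted = impedance(f, k=3e3)    # hypothetical stiffer, contracted TT

# Below resonance the stiffness term k/w dominates the reactance, so a
# stiffer system has larger |Z|: the ear admits less motion for the same
# drive, the predicted low-frequency effect of TT contraction.
print(abs(z_contracted) > abs(z_relaxed))  # True
```

The point of the sketch is only the direction of the effect: stiffening the chain raises low-frequency impedance, which is the kind of change the model is built to predict quantitatively.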
To localize a sound, the auditory system uses multiple cues, including binaural differences in timing and level that arise from the separation of the ears by the solid mass of the head. It has repeatedly been shown that the ability to utilize these cues is plastic and experience-based. Vibrotactile input shares many common features with auditory signals, and there is some overlap between the frequency ranges of sensitivity of the ear and skin. In this study, we examine whether the auditory system is capable of combining auditory and tactile inputs to localize sounds using a multi-speaker array. To induce deficits in azimuthal localization, one ear was plugged. To examine cross-modal localization, the input level to the plugged ear was recorded via microphone, and a vibratory signal that was perceptually matched in intensity was presented to the shoulder on the same side as the plugged ear. Participants' ability to localize low-pass, band-pass, high-pass, and broadband sounds was measured. Results showed that, relative to baseline (plugged) conditions, localization performance improved, suggesting that listeners can combine auditory and tactile information to create a sense of auditory space.