For a single sound event, listeners identified the direction of motion correctly about 90% of the time. With two sound events, the correct-response percentage was lower. In these experiments, the sound events were presented through headphones, and the two events were physically separated in both frequency and onset time, so it should have been possible to perceive them as separate events. To verify that the subjects indeed perceived the two events distinctly, their perception of the onset-time difference and their discrimination of the different-frequency sound events were examined. When the interevent onset interval (IEOI) exceeded 40 ms, the subjects perceived both the onset-time difference and the order of the two presented sound events, confirming that the events were perceived separately at long IEOIs. Nevertheless, the correct-response percentage for the direction of motion of the target was significantly lower with two sound events than with one: the target was strongly affected by the nontarget. Thus, even when two sound events are perceived separately in terms of frequency and temporal order, it remains difficult to perceive the direction of motion of the target.
The present study examined the dynamic properties of the across-frequency integration mechanism, specifically the extent to which information about the direction of changes in the interaural time difference (ITD) is integrated or compared across frequencies. The stimulus was a complex tone consisting of two sinusoidal carriers, one at 400 Hz and the other at 700 Hz. A sinusoidal modulation of the ITD was imposed on one carrier alone or on both carriers simultaneously. The ITD of each carrier was centered at 0 ms, and the modulation started and ended at zero phase. When imposed on both carriers, the ITD modulations were either in phase or in anti-phase with each other. Experiment 1 measured the threshold modulation depth for detecting the modulation with an adaptive method. Thresholds were generally lower when both carriers were modulated than when only one was, indicating across-frequency integration of information about the presence of modulation. The threshold, however, did not differ significantly between the in-phase and anti-phase conditions, even at a modulation rate as low as 1 Hz. Experiment 2 measured the discriminability of in-phase from anti-phase modulation, with the modulation depth fixed at a suprathreshold value (600 μs). Performance varied greatly among the listeners and was near chance for half of them even at a 1-Hz rate. The study thus found no compelling evidence that the auditory system is sensitive to the relative phase of ITD modulations under the conditions tested, suggesting that the directional information of even slow (1-Hz) ITD modulation is not combined effectively across frequencies.
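The stimulus described above can be sketched in a few lines of NumPy: each carrier is rendered dichotically, with an instantaneous ITD that follows a sinusoid centered at zero and starting at zero phase, and the two carriers' modulators are either in phase or offset by π. This is a minimal illustration only, not the authors' actual stimulus-generation code; the function name, sampling rate, and 600-μs depth are our own assumptions based on the abstract.

```python
import numpy as np

def itd_modulated_carrier(fc, dur, fs, depth_s, fmod, mod_phase=0.0):
    """One sinusoidal carrier (fc, Hz) whose interaural time difference
    is sinusoidally modulated around 0 s, starting and ending at zero phase.
    Returns (left, right) channel waveforms."""
    t = np.arange(int(dur * fs)) / fs
    itd = depth_s * np.sin(2 * np.pi * fmod * t + mod_phase)  # instantaneous ITD (s)
    left = np.sin(2 * np.pi * fc * (t + itd / 2))   # left ear leads by half the ITD
    right = np.sin(2 * np.pi * fc * (t - itd / 2))  # right ear lags by half the ITD
    return left, right

fs = 44100       # assumed sampling rate
dur = 1.0        # one full cycle of a 1-Hz modulator
depth = 600e-6   # 600-us modulation depth (the suprathreshold value of Exp. 2)

# Anti-phase condition: the 700-Hz carrier's ITD modulator is shifted by pi.
l4, r4 = itd_modulated_carrier(400, dur, fs, depth, fmod=1.0)
l7, r7 = itd_modulated_carrier(700, dur, fs, depth, fmod=1.0, mod_phase=np.pi)
left, right = l4 + l7, r4 + r7   # dichotic two-carrier complex tone
```

For the in-phase condition, both carriers would use `mod_phase=0.0`; for the single-carrier conditions, only one carrier is passed through the modulation while the other keeps a fixed zero ITD.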
Most psychoacoustic studies of sound localization and motion perception have used only a single sound event. In this report, one or two sound events were presented, and direction and motion perception were investigated using two phenomena that produce motion perception: apparent motion and a synthesized sound image. The correct-response percentage did not differ significantly between the two phenomena. However, the correct-response percentage for the direction of motion was significantly worse for two sound events than for one, even when the two events were perceived separately in terms of frequency, spatial position, and temporal order. Analysis of the incorrect responses showed that perception depends on the interevent onset interval (IEOI) and on the combination of target and nontarget motion conditions (e.g., moving or still). Furthermore, an analysis based on signal detection theory showed that motion perception of the target was strongly affected by the nontarget conditions. This suggests that the process of perceiving the direction and motion of the target may differ from that of perceiving other sound features.