The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a 'dipper' shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well-fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.