2014
DOI: 10.1007/s00422-014-0588-4
An ideal-observer model of human sound localization

Abstract: In recent years, a great deal of research within the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signal…

Cited by 27 publications (56 citation statements)
References 32 publications
“…Depending on the frequency content of the sound and the egocentric location of its source, each of these cues alone may be spatially ambiguous, leading to localization errors (Blauert, 1997; Shinn-Cunningham et al, 2000; King et al, 2001). In line with work on Bayesian integration of both multisensory (e.g., Ernst and Banks, 2002; Alais and Burr, 2004; Körding et al, 2007; Rowland et al, 2007; Targher et al, 2012; Hollensteiner et al, 2015; Zhou et al, 2018) and unisensory (e.g., Landy and Kojima, 2001; Knill and Saunders, 2003; Hillis et al, 2004; Watt et al, 2005; Sturz and Bodily, 2010) spatial cues, the localizing brain can be usefully compared with an ideal observer, integrating sound localization cues to resolve ambiguities by weighting each cue according to its relative reliability (Reijniers et al, 2014).…”
Section: Introduction
confidence: 99%
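The reliability weighting described in this citation statement is, for independent Gaussian cues, the standard inverse-variance combination rule of ideal-observer models. The sketch below illustrates that rule only; the full model of Reijniers et al. (2014) is considerably richer, and the cue names and numbers here are hypothetical.

```python
import numpy as np

def fuse_cues(estimates, variances):
    """Combine independent Gaussian cue estimates by inverse-variance
    (reliability) weighting: the more reliable a cue, the larger its
    weight in the fused estimate."""
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    w /= w.sum()
    mu = float(np.dot(w, estimates))        # fused location estimate
    var = float(1.0 / np.sum(1.0 / v))      # fused variance (always <= min cue variance)
    return mu, var

# Hypothetical example: an ITD-based azimuth estimate of 10 deg (variance 4)
# and a spectral-cue estimate of 20 deg (variance 16); the more reliable
# ITD cue dominates the fused estimate.
mu, var = fuse_cues([10.0, 20.0], [4.0, 16.0])
print(mu, var)  # 12.0 3.2
```

Note that the fused variance (3.2) is smaller than either cue's variance alone, which is why integrating ambiguous cues can resolve localization errors that any single cue would make.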
“…The frequency range for the generic composite feature-based approach is selected empirically: the [0, 4] kHz and [3, 5] kHz bands serve as the phase- and magnitude-feature regions for the feature-based method, while the full-band signal is used for the correlation approach. For the comparison, the mean angular error is employed as the metric of localization performance.…”
Section: Simulation Configuration
confidence: 99%
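The mean angular error metric mentioned above needs one detail handled carefully: angular differences must be wrapped so that, say, estimates of 359° and 1° count as 2° apart. A minimal sketch of such a metric (the cited paper does not specify its exact wrapping convention, so this is an assumption):

```python
import numpy as np

def mean_angular_error(est_deg, true_deg):
    """Mean absolute angular error in degrees, with each difference
    wrapped into (-180, 180] before taking the absolute value."""
    d = np.asarray(est_deg, dtype=float) - np.asarray(true_deg, dtype=float)
    d = (d + 180.0) % 360.0 - 180.0   # wrap to (-180, 180]
    return float(np.mean(np.abs(d)))

# 359 deg vs 1 deg wraps to a 2 deg error; 10 deg vs 5 deg is 5 deg.
print(mean_angular_error([359.0, 10.0], [1.0, 5.0]))  # 3.5
```

Without the wrapping step, the first pair would contribute a spurious 358° error and dominate the mean.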
“…To adapt those devices for compelling spatial experiences, it is necessary to identify the acoustic cues most valuable to the human auditory system [1–4]. Additionally, testing and evaluating the quality of the reproduced sound field and the listening experience these devices provide is another challenging problem [5–7]. Hearing tests with real human volunteers can be time-consuming and would be infeasible for a large number of devices and testing scenarios.…”
Section: Introduction
confidence: 99%
“…There is a significant improvement in speech comprehension when distracters are in the opposite hemifield as opposed to the same hemifield (Hawley et al 1999). Although an opposite-hemifield configuration would provide more localization information because of the position of the ears (e.g., Reijniers et al 2014), this advantage may be enhanced by the ascending auditory system: presentation in opposing hemifields would lead to preferential processing in the contralateral ICs. We would predict a substantial, qualitative difference between processing of stimuli at midline (where the signal would be equally represented in both ICs) and stimuli presented away from midline (where the signal would be represented primarily in one IC).…”
Section: Monaural and Binaural Processing Of Harmonic Complexes
confidence: 99%